Collaborating Authors


Adobe Research Proposes HDMatt, A Deep Learning-Based Image Matting Approach


Image matting is an essential technique to estimate the foreground objects in images and videos for editing and composition. The conventional deep learning approach takes the input image and an associated trimap and produces the alpha matte using convolutional neural networks. But because real-world input images for matting are often of very high resolution, the efficiency of such approaches suffers in practice due to hardware limitations. To address this issue, HDMatt, the first deep learning-based image matting approach designed for high-resolution inputs, has been proposed by a group of researchers from UIUC (University of Illinois, Urbana-Champaign), Adobe Research, and the University of Oregon. HDMatt works on the 'divide-and-conquer' principle.
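The 'divide-and-conquer' idea can be illustrated with a minimal sketch: split a high-resolution input into overlapping patches, run a matting model on each patch, and blend the overlapping alpha predictions back together. This is an invented illustration, not HDMatt's actual architecture (which also aggregates context across patches); `predict_patch` is a hypothetical stand-in for the network.

```python
import numpy as np

def matte_highres(image, trimap, predict_patch, patch=512, overlap=64):
    """Patch-based ('divide-and-conquer') matting sketch: crop overlapping
    windows, predict an alpha matte per crop with `predict_patch`, and
    average the predictions where crops overlap."""
    h, w = image.shape[:2]
    alpha = np.zeros((h, w), dtype=np.float64)   # accumulated alpha predictions
    weight = np.zeros((h, w), dtype=np.float64)  # how many crops covered each pixel
    step = patch - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            y1, x1 = min(y + patch, h), min(x + patch, w)
            a = predict_patch(image[y:y1, x:x1], trimap[y:y1, x:x1])
            alpha[y:y1, x:x1] += a
            weight[y:y1, x:x1] += 1.0
    return alpha / weight  # blend overlapping predictions by averaging
```

A real system would also need to share context across patches (the paper's motivation), since a naive crop can lose long-range cues about the foreground; the blending here simply averages overlapping predictions.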



Nearly 100 years ago, the word "robot" was invented by the Czechoslovak brothers Karel and Josef Čapek. The word appeared for the first time in Karel's theatre play titled R.U.R. in 1920. The play is about humanoid robots who seem happy to work for humans at first, but later a robot rebellion leads to the extinction of the human race. The play quickly achieved international success, performed not only in Prague but also in London, New York, and Chicago. Karel Čapek was one of the first people to consider the potential threat posed by machine inventions advancing too quickly or without regulation.

All-women team in Bharat helps world adopt AI


The Chicago Cubs won the US Major League Baseball World Series title in 2016, their first win in 108 years. The LA Dodgers reached the 2017 World Series before losing a series later tainted by a cheating scandal. What the two teams shared in their dream runs was the use of AI. Florida-based Kinatrax had high-speed cameras installed at strategic points on baseball grounds for synchronized motion-capture videos of pitchers. These were annotated, tagged and analysed to create the 3D anatomical models that fine-tuned pitching mechanics for each player.

AI could help rid health care of biases. It also might make them worse


Hospitals and health care companies are increasingly tapping experimental artificial intelligence tools to improve medical care or make it more cost-effective. At best, that technology has the potential to make it easier to detect and diagnose diseases, streamline care, and even eliminate some forms of bias in the health care system. But if it's not designed and deployed carefully, AI could also perpetuate existing biases or even exacerbate their impact. "Badly built algorithms can create biases, but well-built algorithms can actually undo the human biases that are in the system," Sendhil Mullainathan, a computational and behavioral science researcher at the University of Chicago's Booth School of Business, told STAT's Shraddha Chakradhar at the STAT Health Tech Summit this month. Mullainathan also spoke with STAT about the importance of communication in developing AI tools, the data used to train algorithms, and how AI could improve care.

The Future of Artificial Intelligence


"[AI] is going to change the world more than anything in the history of mankind," AI oracle and venture capitalist Dr. Kai-Fu Lee said in 2018. In a nondescript building close to downtown Chicago, Marc Gyongyosi and the small but growing crew of IFM/Onetrack.AI have one rule that rules them all: think simple. The words are written in simple font on a simple sheet of paper that's stuck to a rear upstairs wall of their industrial two-story workspace. Sitting at his cluttered desk, located near an oft-used ping-pong table and prototypes of drones from his college days suspended overhead, Gyongyosi punches some keys on a laptop to pull up grainy video footage of a forklift driver operating his vehicle in a warehouse. It was captured from overhead courtesy of a Onetrack.AI "forklift vision system." Employing machine learning and computer vision for detection and classification of various "safety events," the shoebox-sized device doesn't see all, but it sees plenty. Like which way the driver is looking as he operates the vehicle, how fast he's driving, where he's driving, locations of the people around him and how other forklift operators are maneuvering their vehicles. IFM's software automatically detects safety violations (for example, cell phone use) and notifies warehouse managers so they can take immediate action. The main goals are to prevent accidents and increase efficiency. The mere knowledge that one of IFM's devices is watching, Gyongyosi claims, has had "a huge effect." "If you think about a camera, it really is the richest sensor available to us today at a very interesting price point," he says. "Because of smartphones, camera and image sensors have become incredibly inexpensive, yet we capture a lot of information."

News at a glance


### Astronomy

Talk about a sharper image: A recently constructed imaging sensor array that will be used when the Vera C. Rubin Observatory in Chile opens in 2021 has captured a world-record 3200 megapixels in a single shot. It recorded a variety of objects, including a Romanesco broccoli, at that resolution, which is detailed enough to show a golf ball clearly from 24 kilometers away. The sensor array's focal plane is more than 60 centimeters wide, much larger than the 3.5-centimeter sensors on high-end consumer digital cameras, says the SLAC National Accelerator Laboratory, which built the array. When the telescope, funded by the U.S. National Science Foundation, begins operating next year, it will image the entire southern sky every few nights for 10 years, cataloguing billions of galaxies each time. The surveys will shed light on mysterious dark energy and dark matter, which make up most of the universe's mass. With its repeat coverage, the telescope will make the equivalent of an astronomical movie in order to discover objects that suddenly appear, move, or go bang.

### Biomedicine

Corticosteroids given orally or intravenously should be the standard therapy for people with “severe and critical” COVID-19, the World Health Organization (WHO) said in new guidelines issued last week—but they should not be given to patients with mild cases. In June, a large U.K. trial named Recovery first showed that the steroid dexamethasone cut deaths among ventilated COVID-19 patients by 35% after 28 days of treatment. That result was confirmed by a WHO-sponsored meta-analysis published in JAMA on 2 September that included Recovery and six other studies testing dexamethasone, as well as two other corticosteroids—hydrocortisone and methylprednisolone. Many countries, including the United States, had already included corticosteroids in their national treatment guidelines.
But WHO's recommendations will be important as a signal to low- and middle-income countries, says Martin Landray, one of Recovery's principal investigators.

### Public health

COVID-19 virus particles drifting through a Chinese apartment building's plumbing may have infected some residents, a study has found, raising fears of yet another way that the disease could spread. The case echoes a 2003 outbreak of severe acute respiratory syndrome (SARS) that spread through the pipes of a Hong Kong apartment building. Such transmission is difficult to prove. But scientists suspect that aerosolized coronavirus may have spread from the bathroom of a Guangzhou family of five through a floor drain and into the building's wastewater pipes. Two middle-aged couples living in apartments above the family later contracted COVID-19. The study appeared last week in Annals of Internal Medicine.

### Conservation

A plan to reforest a cross-continental strip of Africa to hold back expansion of the Sahara Desert and the semi-arid Sahel has made little progress—even though the project is halfway toward its planned completion date in 2030, a report says. Participating countries have planted only 4 million hectares of trees and other vegetation for the Great Green Wall, well short of the 100 million planned to stretch 7000 kilometers from Senegal to Djibouti, says the report by the Climatekos consulting firm, presented on 7 September at a meeting of the countries' ministers. Supporters predicted the project would also create jobs and capture carbon dioxide. Scientists have said creating grasslands may be more effective than planting trees to resist desertification, The Guardian reported.

### Philanthropy

Rice University last week received a $100 million gift for materials science. It is the largest to date in that discipline recorded in a database of gifts for engineering maintained by The Chronicle of Philanthropy.
The funding will be used to pair materials science with artificial intelligence to advance the design and manufacturing of new materials, for applications that include sustainable water systems, energy, and telecommunications. The donor was the Robert A. Welch Foundation, which supports chemistry research in Texas.

### Conservation

Scientists hailed a move last week by the European Union to ban the use of lead ammunition near wetlands and waterways. The European Chemicals Agency has estimated that as many as 1.5 million aquatic birds die annually from lead poisoning because they swallow some of the 5000 tons of lead shot that land in European wetlands each year. Its persistence in the environment is also considered a human health hazard. The EU Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) committee approved the ban after years of controversy. The German delegation, which had abstained in a July vote on the issue, changed its stance to support the measure after a letter from 75 scientists and petitions signed by more than 50,000 people called for it to do so. The European Commission and the European Parliament are expected to formally approve the ban, allowing it to go into effect in 2022. REACH may debate a complete ban on lead ammunition and fishing weights later this year.

### Chemical weapons

Alexei Navalny, a Russian opposition politician, was poisoned with a nerve agent “identified unequivocally in tests” as a Novichok, an exotic Soviet-era chemical weapon, German Chancellor Angela Merkel said on 2 September. Navalny fell ill on 20 August after drinking a cup of tea at a Siberian airport. He was flown to Berlin and this week emerged from a coma.
German military scientists at the Bundeswehr Institute of Pharmacology and Toxicology in Munich haven't released details of their tests, but they had clear targets to hunt for: Like other nerve agents, Novichoks bind to the enzymes acetylcholinesterase and butyrylcholinesterase, creating a telltale conjugate compound. Novichok agents came to wide public notice in 2018 after one was used in an assassination attempt against former Russian spy Sergei Skripal in the United Kingdom. The attack prompted nations to push for a crackdown on Novichok agents, and last year they were added to the list of toxic chemicals regulated under the Chemical Weapons Convention.

### COVID-19

In one of the largest surveys of Americans since COVID-19 lockdowns began, a majority reported having some symptoms of depression, up from one-quarter in a prepandemic survey. The prevalence of symptoms graded as moderate to severe tripled, to 27.8% of respondents. A research team compared results from two surveys used to screen for depression: one administered to more than 5000 people in 2017 and 2018 by the U.S. Centers for Disease Control and Prevention, the other given to 1400 people in early April by NORC at the University of Chicago. Prevalence of depression symptoms rose in all demographic groups and especially among individuals facing financial problems, job loss, or family deaths. The increases in self-reported symptoms are larger than those recorded in previous surveys after large-scale traumatic events in other countries, including outbreaks of the severe acute respiratory syndrome, H1N1, and Ebola, the authors write in the 2 September issue of JAMA Network Open.

### A U.S. vaccine leader's vow: Politics stays out

“I would immediately resign if there is undue interference in this process.” So said Moncef Slaoui, scientific director of Operation Warp Speed, the U.S. effort to quickly develop a vaccine for COVID-19, in an interview with Science.
To date, Warp Speed has invested more than $10 billion in eight vaccine candidates. Three are now in large-scale efficacy trials, and interim reviews of their data by independent safety and monitoring boards could reveal evidence of protection as early as October. Slaoui, an immunologist who formerly headed vaccine development at GlaxoSmithKline, answered questions from Science last week about how Warp Speed operates and addressed concerns that political pressure before the 3 November U.S. presidential election may lead to an emergency use authorization of a COVID-19 vaccine before it is proven safe and effective. (On 8 September, nine companies developing vaccines for the pandemic coronavirus pledged not to seek a premature authorization.) “It needs to be absolutely shielded from the politics,” Slaoui says. “Trust me, there will be no [authorization request] filed if it's not right. … The science is what is going to guide us. … And at the end of the day, the facts and the data will be made available to everyone who wants to look at them and will be transparent.” Slaoui defended Warp Speed's decision to not consider vaccines made of whole, inactivated viruses, a time-tested approach. China has three such vaccines in efficacy trials, but he worries they could cause serious side effects in people who receive them. Slaoui also said if it had been his choice, the United States would have participated in COVAX, a mechanism for countries to invest collectively in vaccines and share them; the Trump administration declined to join. The full interview—one of Slaoui's most detailed since taking the job in May—is at .

Putting nanoscale interactions under the microscope


Liquid-phase transmission electron microscopy (TEM) has recently been applied to materials chemistry to gain fundamental understanding of various reaction and phase transition dynamics at nanometer resolution. Researchers from the University of Illinois have developed a machine learning workflow to streamline the process of extracting physical and chemical parameters from TEM video data. The new study, led by Qian Chen, a professor of materials science and engineering at the University of Illinois, Urbana-Champaign, builds upon her past work with liquid-phase electron microscopy and has been published in the journal ACS Central Science. Being able to see – and record – the motions of nanoparticles is essential for understanding a variety of engineering challenges. Liquid-phase electron microscopy, which allows researchers to watch nanoparticles interact, is useful for research in medicine, energy, and environmental sustainability, and in the fabrication of metamaterials, to name a few applications.
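One small step such a workflow might include is linking particle positions across video frames to estimate motion. The sketch below is an invented illustration of nearest-neighbor track linking, not the published pipeline; `link_tracks` and its inputs are assumptions made for this example.

```python
import numpy as np

def link_tracks(centroids_per_frame):
    """Link particle centroids across frames by nearest-neighbor matching,
    a common first step when extracting motion statistics from microscopy
    video. Assumes every particle is detected in every frame."""
    tracks = [[c] for c in centroids_per_frame[0]]  # one track per particle in frame 0
    for frame in centroids_per_frame[1:]:
        pts = np.asarray(frame, dtype=float)
        for track in tracks:
            last = np.asarray(track[-1], dtype=float)
            dists = np.linalg.norm(pts - last, axis=1)      # distance to every candidate
            track.append(tuple(pts[int(np.argmin(dists))]))  # greedily take the nearest
    return tracks
```

From linked tracks one can then compute displacements or mean squared displacement, the kinds of physical parameters such a workflow aims to extract automatically.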

Healthcare AI: How one hospital system is using technology to adapt to COVID-19


TechRepublic's Karen Roby spoke with Jay Roszhart of Memorial Health Center's Systems Ambulatory Group in Illinois about artificial intelligence (AI) in hospitals. The following is an edited transcript of their conversation. Karen Roby: The American Hospital Association estimates that hospitals have lost more than $200 billion because of the COVID-19 pandemic. Hospital leaders are always looking for ways to get patients back into doctors' offices and the hospitals in a safe and secure way. Talk a little bit just to start us off here about the population that you serve there in Illinois.

Human-centered redistricting automation in the age of AI


Redistricting—the constitutionally mandated, decennial redrawing of electoral district boundaries—can distort representative democracy. An adept map drawer can elicit a wide range of election outcomes just by regrouping voters (see the figure). When there are thousands of precincts, the number of possible partitions is astronomical, giving rise to enormous potential manipulation. Recent technological advances have enabled new computational redistricting algorithms, deployable on supercomputers, that can explore trillions of possible electoral maps without human intervention. This leaves us to wonder if Supreme Court Justice Elena Kagan was prescient when she lamented, “(t)he 2010 redistricting cycle produced some of the worst partisan gerrymanders on record. The technology will only get better, so the 2020 cycle will only get worse” ( Gill v. Whitford ). Given the irresistible urge of biased politicians to use computers to draw gerrymanders and the capability of computers to autonomously produce maps, perhaps we should just let the machines take over. The North Carolina Senate recently moved in this direction when it used a state lottery machine to choose from among 1000 computer-drawn maps. However, improving the process and, more importantly, the outcomes results not from developing technology but from our ability to understand its potential and to manage its (mis)use. It has taken many years to develop the computing hardware, derive the theoretical basis, and implement the algorithms that automate map creation (both generating enormous numbers of maps and uniformly sampling them) (1–4). Yet these innovations have been “easy” compared with the very difficult problem of ensuring fair political representation for a richly diverse society. Redistricting is a complex sociopolitical issue for which the role of science and the advances in computing are nonobvious.
Accordingly, we must not allow a fascination with technological methods to obscure a fundamental truth: The most important decisions in devising an electoral map are grounded in philosophical or political judgments about which the technology is irrelevant. It is nonsensical to completely transform a debate over philosophical values into a mathematical exercise. As technology advances, computers are able to digest progressively larger quantities of data per time unit. Yet more computation is not equivalent to more fairness. More computation fuels an increased capacity for identifying patterns within data. But more computation has no relationship with the moral and ethical standards of an evolving and developing society. Neither computation nor even an equitable process guarantees a fair outcome. The way forward is for people to work collaboratively with machines to produce results not otherwise possible. To do this, we must capitalize on the strengths and minimize the weaknesses of both artificial intelligence (AI) and human intelligence. Ensuring representational fairness requires metacognition that integrates creative and benevolent compromises. Humans have the advantage over machines in metacognition. Machines have the advantage in producing large numbers of rote computations. Although machines produce information, humans must infuse values to make judgments about how this information should be used (5).

[Figure: Time to regroup. Markedly different outcomes can emerge when six Republicans and six Democrats in these 12 geographic units are grouped into four districts. A 50-50 party split can be turned into a 3:1 advantage for either party. When redistricting a state with thousands of precincts, the potential for political manipulation is enormous. Graphic: X. Liu/Science]

Accordingly, machines can be tasked with the menial aspects of cognition—the meticulous exploration of the astronomical number of ways in which a state can be partitioned.
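The point of the "Time to regroup" figure can be reproduced with a toy calculation: the same twelve voters, six Republicans and six Democrats, grouped two different ways into four three-unit districts, yield either a 2-2 seat split or a 3-1 split. The unit layout and groupings below are invented for illustration.

```python
# Twelve geographic units, alternating Republican and Democratic voters.
units = list("RDRDRDRDRDRD")  # indices 0-11; six R and six D in total

def seats(partition):
    """Count districts won by each party (simple majority of 3 units)."""
    wins = {"R": 0, "D": 0}
    for district in partition:
        votes = [units[i] for i in district]
        wins["R" if votes.count("R") > votes.count("D") else "D"] += 1
    return wins

# Grouping consecutive units preserves the statewide 50-50 balance.
balanced = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11)]

# Packing three Democrats into one district lets R win the other three 2-1.
skewed = [(1, 3, 5), (0, 2, 7), (4, 6, 9), (8, 10, 11)]
```

Running `seats(balanced)` gives a 2-2 split, while `seats(skewed)` gives Republicans a 3-1 advantage, all from the same voters. With thousands of precincts instead of twelve units, the space of such groupings is what the computational algorithms explore.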
This helps us classify and understand the range of possibilities and the interplay of competing interests. Machines enhance and inform intelligent decision-making by helping us navigate the unfathomably large and complex informational landscape. Left to their own devices, humans have shown themselves to be unable to resist the temptation to chart biased paths through that terrain. The ideal redistricting process begins with humans articulating the initial criteria for the construction of a fair electoral map (e.g., population equality, compactness measures, constraints on breaking political subdivisions, and representation thresholds). Here, the concerns of many different communities of interest should be solicited and considered. Note that this starting point already requires critical human interaction and considerable deliberation. Determining what data to use, and how, is not automatable (e.g., citizen voting age versus voting age population, relevant past elections, and how to forecast future vote choices). Partisan measures (e.g., mean-median difference, competitiveness, likely seat outcome, and efficiency gap) as well as vote prediction models, which are often contentious in court, should be transparently specified. Once we have settled on the inputs to the algorithm, the computational analysis produces a large sample of redistricting plans that satisfy these principles. Trade-offs usually arise (e.g., adhering to compactness rules might require splitting jagged cities). Humans must make value-laden judgments about these trade-offs, often through contentious debate. The process would then iterate. After some contemplation, we may decide, perhaps, on two, not three, majority-minority districts so that a particular town is kept together. These refined goals could then be specified for another computational analysis round with further deliberation to follow. 
Sometimes a Pareto improvement principle applies, with the algorithm assigned to ascertain whether, for example, city splits or minority representation can be maintained or improved even as one raises the overall level of compliance with other factors such as compactness. In such a process, computers assist by clarifying the feasibility of various trade-offs, but they do not supplant the human value judgments that are necessary for adjusting these plans to make them “humanly rational.” Neglecting the essential human role is to substitute machine irrationality for human bias. Automation in redistricting is not a substitute for human intelligence and effort; its role is to augment human capabilities by regulating nefarious intent with increased transparency, and by bolstering productivity by efficiently parsing and synthesizing data to improve the informational basis for human decision-making. Redistricting automation does not replace human labor; it improves it. The critical goal for AI in governance is to design successful processes for human-machine collaboration. This process must inhibit the ill effects from sole reliance on humans as well as overreliance on machines. Human-machine collaboration is key, and transparency is essential. The most promising institutional route in the near term for adopting this human-machine line-drawing process is through independent redistricting commissions (IRCs) that replace politicians with a balanced set of partisan citizen commissioners. IRCs are a relatively new concept and exist in only some states. They have varied designs. In eight states, a commission has primary responsibility for drawing the congressional plan. In six, they are only advisory to the legislature. In two states, they have no role unless the legislature fails to enact a plan. IRCs also vary in the number of commissioners, partisan affiliation, how the pool of applicants is created, and who selects the final members. 
The lack of a blueprint for an IRC allows each to set its own rules, paving the way for new approaches. Although no best practices have yet emerged for these new institutions, we can glean some lessons from past efforts about how to integrate technology into a partisan balanced deliberation process. For example, Mexico's process integrated algorithms but struggled with transparency, and the North Carolina Senate relied heavily on a randomness component. Both offer lessons and help us refine our understanding of how to keep bias from creeping into the process. Once these structural decisions are made, we must still contend with the fact that devising electoral maps is an intricate process, and IRCs generally lack the expertise that politicians and their staffs have cultivated from decades of experience. In addition, as the bitter partisanship of the 2011 Arizona citizen commission demonstrated, without a method to assess the fairness of proposals, IRCs can easily deadlock or devolve into lengthy litigation battles (6). New technological tools can aid IRCs in fulfilling their mandate by compensating for this experience deficiency as well as providing a way to benchmark fairness conceptualizations. To maintain public confidence in their processes, IRCs would need to specify the criteria that guide the computational algorithm and implement the iterative process in a transparent manner. Open deliberation is crucial. For instance, once the range of maps is known to produce, say, a seven-to-eight likely split in Democrat-to-Republican seats 35% of the time, an eight-to-seven likely Democrat-to-Republican split 40% of the time, and something outside these two choices 25% of the time, how does an IRC choose between these partisan splits? Do they favor a split that produces more compact districts? How do they weigh the interests of racial minorities versus partisan considerations?
Regardless of what technology may be developed, in many states, the majority party of the state legislature assumes the primary role in creating a redistricting plan—and with rare exceptions, enjoys wide latitude in constructing district lines. There is neither a requirement nor an incentive for these self-interested actors to consent to a new process or to relinquish any of their constitutionally granted control over redistricting. All the same, technological innovation can still have benefits by ameliorating informational imbalance. Consider redistricting Ohio's 16 congressional seats. A computational analysis might reveal that, given some set of prearranged criteria (e.g., equal population across districts, compact shapes, a minority district, and keeping particular communities of interest together), the number of Republican congressional seats usually ends up being 9 out of 16, and almost never more than 11. Although the politicians could still then introduce a map with 12 Republican seats, they would now have to weigh the potential public backlash from presenting electoral districts that are believed, a priori, to be overtly and excessively partisan. In this way, the information that is made more broadly known through technological innovation induces a new pressure point on the system whereby reform might occur. Although politicians might not welcome the changes that technology brings, they cannot prevent the ushering in of a new informational era. States are constitutionally granted the right to enact maps as they wish, but their processes in the emerging digital age are more easily monitored and assessed. Whereas before, politicians exploited an information advantage, scientific advances can decrease this disparity and subject the process to increased scrutiny. 
Although science has the potential to loosen the grip that partisanship has held over the redistricting process, we must ensure that the science behind redistricting does not, itself, become partisanship's latest victim. Scientific research is never easy, but it is especially vulnerable in redistricting where the technical details are intricate and the outcomes are overtly political. We must be wary of consecrating research aimed at promoting a particular outcome or believing that a scientist's credentials absolve partisan tendencies. In redistricting, it may seem obvious to some that the majority party has abused its power, but validating research that supports that conclusion because of a bias toward such a preconceived outcome would not improve societal governance. Instead, use of faulty scientific tests as a basis for invalidating electoral maps allows bad actors to later overturn good maps with the same faulty tests, ultimately destroying our ability to legally distinguish good from bad. Validating maps using partisan preferences under the guise of science is more dangerous than partisanship itself. The courts must also contend with the inconvenient fact that although their judgments may rely on scientific research, scientific progress is necessarily and excruciatingly slow. This highlights a fundamental incompatibility between the precedential nature of the law and the unrelenting need for high-quality science to take time to ponder, digest, and deliberate. Because of the precedential nature of legal decision-making, enshrining underdeveloped ideas has harmful path-dependent effects. Hence, peer review by the relevant scientific community, although far from perfect, is clearly necessary. For redistricting, technical scientific communities as well as the social scientific and legal communities are all relevant and central, with none taking over the role of another. 
The relationship of technology with the goals of democracy must not be underappreciated—or overappreciated. Technological progress can never be stopped, but we must carefully manage its impact so that it leads to improved societal outcomes. The indispensable ingredient for success will be how humans design and oversee the processes we use for managing technological innovation.

References

1. W. K. T. Cho, Y. Y. Liu, arXiv:2007.11461 (22 July 2020).
2. W. K. T. Cho, Y. Y. Liu, “A massively parallel evolutionary Markov chain Monte Carlo algorithm for sampling complicated multimodal state spaces,” paper presented at SC18: The International Conference for High Performance Computing, Networking, Storage and Analysis, Dallas, TX, 11 to 16 November 2018.
3. Y. Y. Liu, W. K. T. Cho, S. Wang, Swarm Evol. Comput. 30, 78 (2016).
4. Y. Y. Liu, W. K. T. Cho, Appl. Soft Comput. 90, 106129 (2020).
5. Conceptualizing “fairness” for a diverse society with overlapping and incongruous interests is complex (7). Although we primarily discuss algorithmic advances that enable automated drawing and uniform sampling of maps, other measurement issues remain. Stephanopoulos and McGhee (8) suggest that the efficiency gap, their measure of “wasted votes,” should be the same across parties. Chikina et al. (9) submit that a map should not be “carefully crafted” (i.e., producing different outcomes than geographically similar maps). Fifield et al. (10) and Herschlag et al. (11) present local ensemble sampling approaches to identify gerrymanders. Each of these is but one point in a massive evolving discussion. Along these lines, Warrington (12) explores various partisan gerrymandering measures. Saxon (13) examines the impact of various compactness measures; Cho and Rubinstein-Salzedo (14) discuss the concept of “carefully crafted” maps; and Cho and Liu (15) highlight difficulties involved in uniformly sampling maps.
6. B. E. Cain, Yale Law J. 121, 1808 (2012).
7. B. J. Gaines, in Rethinking Redistricting: A Discussion About the Future of Legislative Mapping in Illinois (Institute of Government and Public Affairs, University of Illinois, Urbana-Champaign, Chicago, and Springfield, 2011), pp. 6–10.
8. N. O. Stephanopoulos, E. M. McGhee, Univ. Chic. Law Rev. 82, 831 (2015).
9. M. Chikina, A. Frieze, W. Pegden, Proc. Natl. Acad. Sci. U.S.A. 114, 2860 (2017).
10. B. Fifield, M. Higgins, K. Imai, A. Tarr, J. Comput. Graph. Stat. 10.1080/10618600.2020.1739532 (2020).
11. G. Herschlag et al., Stat. Public Policy 10.1080/2330443X.2020.1796400 (2020).
12. G. S. Warrington, Elect. Law J. 18, 262 (2019).
13. J. Saxon, Elect. Law J. 28, 372 (2020).
14. W. K. T. Cho, S. Rubinstein-Salzedo, Stat. Public Policy 6, 44 (2019).
15. W. K. T. Cho, Y. Y. Liu, Physica A 506, 170 (2018).

Acknowledgments: W.K.T.C. has been an expert witness for A. Philip Randolph Institute v. Householder, Agre et al. v. Wolf et al., and The League of Women Voters of Pennsylvania et al. v. The Commonwealth of Pennsylvania et al.

Machine learning peeks into nano-aquariums


In the nanoworld, tiny particles such as proteins appear to dance as they transform and assemble to perform various tasks while suspended in a liquid. Recently developed methods have made it possible to watch and record these otherwise-elusive tiny motions, and researchers have now taken a step forward by developing a machine learning workflow to streamline the process. The new study, led by Qian Chen, a professor of materials science and engineering at the University of Illinois, Urbana-Champaign, builds upon her past work with liquid-phase electron microscopy and is published in the journal ACS Central Science. Being able to see and record the motions of nanoparticles is essential for understanding a variety of engineering challenges. Liquid-phase electron microscopy, which allows researchers to watch nanoparticles interact inside tiny aquarium-like sample containers, is useful for research in medicine, energy, and environmental sustainability, and in the fabrication of metamaterials, to name a few areas.
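To give a flavor of what such a workflow automates, here is a minimal sketch of one step: detecting particles in each video frame and linking them across frames to recover trajectories. This is an illustrative toy, not the authors' published pipeline; the threshold-based segmentation, the greedy nearest-neighbor linking, and the synthetic frames are all assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_particles(frame, threshold):
    """Return centroids of bright blobs in one microscopy frame."""
    mask = frame > threshold                 # separate particles from liquid
    labels, n = ndimage.label(mask)          # connected-component labeling
    return ndimage.center_of_mass(frame, labels, range(1, n + 1))

def link_frames(prev, curr):
    """Greedy nearest-neighbor matching of centroids across two frames."""
    matches = []
    for p in prev:
        dists = [np.hypot(p[0] - c[0], p[1] - c[1]) for c in curr]
        matches.append((p, curr[int(np.argmin(dists))]))
    return matches

# Synthetic two-frame "movie": a single particle drifting sideways.
f1 = np.zeros((32, 32)); f1[10:13, 10:13] = 1.0
f2 = np.zeros((32, 32)); f2[10:13, 14:17] = 1.0
c1 = detect_particles(f1, 0.5)
c2 = detect_particles(f2, 0.5)
print(link_frames(c1, c2))  # the centroid shifts ~4 pixels along the columns
```

In a real liquid-phase electron microscopy workflow the segmentation step would be far more robust (frames are noisy and particles change shape), which is where a learned model earns its keep over a fixed threshold.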