An executive order was just issued from the White House regarding "the Use of Trustworthy Artificial Intelligence in Government." Leaving aside the meritless presumption of the government's own trustworthiness -- and the framing that it is the software that has the trust issues -- the order is almost entirely hot air. Like other executive orders, it is limited to what a president can peremptorily force federal agencies to do -- and that really isn't very much, practically speaking. This one "directs Federal agencies to be guided" by nine principles, which gives away the level of impact right there. Please, agencies -- be guided!
A new machine learning algorithm is poised to help urban transportation analysts relieve the bottlenecks and chokepoints that routinely snarl city traffic. The tool, called TranSEC, was developed at the U.S. Department of Energy's Pacific Northwest National Laboratory to give urban traffic engineers access to actionable information about traffic patterns in their cities. Currently, publicly available traffic information at the street level is sparse and incomplete. Traffic engineers have generally relied on isolated traffic counts, collision statistics and speed data to determine roadway conditions. The new tool uses traffic datasets collected from Uber drivers and other publicly available traffic sensor data to map street-level traffic flow over time.
In the past year, lockdowns and other COVID-19 safety measures have made online shopping more popular than ever, but the skyrocketing demand is leaving many retailers struggling to fulfill orders while ensuring the safety of their warehouse employees. Researchers at the University of California, Berkeley, have created new artificial intelligence software that gives robots the speed and skill to grasp and smoothly move objects, making it feasible for them to soon assist humans in warehouse environments. The technology is described in a paper published online today (Wednesday, Nov. 18) in the journal Science Robotics. Automating warehouse tasks can be challenging because many actions that come naturally to humans -- like deciding where and how to pick up different types of objects and then coordinating the shoulder, arm and wrist movements needed to move each object from one location to another -- are actually quite difficult for robots. Robotic motion also tends to be jerky, which can increase the risk of damaging both the products and the robots.
A new machine learning approach offers important insights into catalysis, a fundamental process that makes it possible to reduce the emission of toxic exhaust gases or produce essential materials like fabric. In a report published in Nature Communications, Hongliang Xin, associate professor of chemical engineering at Virginia Tech, and his team of researchers developed a Bayesian learning model of chemisorption, or Bayeschem for short, aiming to use artificial intelligence to unlock the nature of chemical bonding at catalyst surfaces. "It all comes down to how catalysts bind with molecules," said Xin. "The interaction has to be strong enough to break some chemical bonds at reasonably low temperatures, but not so strong that catalysts would be poisoned by reaction intermediates. This rule is known as the Sabatier principle in catalysis." Understanding how catalysts interact with different intermediates and determining how to control their bond strengths so that they fall within that 'goldilocks zone' is the key to designing efficient catalytic processes, Xin said. The research provides a tool for that purpose.
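The Sabatier trade-off described above can be made concrete with a toy "volcano curve": activity peaks at an intermediate binding energy and falls off on either side. This sketch is not Bayeschem or any code from the paper; the function, candidate names, and energy values are all hypothetical, chosen only to illustrate the principle.

```python
# Illustrative sketch of the Sabatier principle (not Bayeschem):
# catalytic activity peaks at an intermediate adsorbate binding energy.
# Too weak (right of the peak) and reactants never activate; too strong
# (left of the peak) and intermediates poison the surface.
# All numeric values below are hypothetical.

def volcano_activity(binding_energy_ev, optimum_ev=-1.6, slope=1.0):
    """Toy activity model: penalize deviation from the optimal binding energy."""
    return -slope * abs(binding_energy_ev - optimum_ev)

# Hypothetical candidate surfaces with assumed binding energies (eV).
candidates = {"metal_A": -0.8, "metal_B": -1.5, "metal_C": -2.4}

# The best catalyst is the one closest to the 'goldilocks zone'.
best = max(candidates, key=lambda m: volcano_activity(candidates[m]))
print(best)
```

In this toy ranking, metal_B wins because its binding energy sits nearest the assumed optimum; the actual Bayeschem model instead infers the physics of chemisorption from data with Bayesian learning.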
SAN JOSE, Calif., Dec. 2, 2020 /PRNewswire/ — MetricStream, the independent market leader in enterprise cloud applications for Governance, Risk, and Compliance (GRC), announced enhancements to its cloud-native, intelligent-by-design M7 Integrated Risk Platform and to its audit, compliance, enterprise risk, third-party risk, and cyber security products, all leveraging the power of Amazon Web Services (AWS). A recent IDC report states that enterprises increased their cloud usage by 60% in 2020. The increased volume and velocity of risks and cybersecurity incidents, along with the growing number of compliance regulations and updates, have made it critical for organizations to gain a more holistic view of their governance, risk, compliance, and cyber programs.
Artificial intelligence (AI) for translation is something Google and other companies already provide to individuals; it can be accessed from your phone. However, translation is a much larger and more complex problem than many people realize. The business community has complex and unique needs that add to the challenge of accurate and reliable translation, and AI is showing increasing capability in meeting them. One of the keys to business translation is the simple reality that each business sector has its own terms, phrases, and even idioms.
Robbie Freeman, vice president of clinical innovation at New York's Mount Sinai Health System, began his career working at the bedside, so he has an intimate appreciation of the real-world value of patient safety projects – and of the importance of ensuring key data is gathered and made actionable through optimal workflows. "I'm a registered nurse, and I think working with patients and spending a lot of time on data entry is what kind of led us to this real focus on clinical workflows and delivering additional value," said Freeman, speaking Wednesday at the HIMSS Machine Learning & AI for Healthcare Digital Summit about some of Mount Sinai's recent automation initiatives. In an earlier, pre-digital age, many of the flow sheets and assessments collected during a nursing assessment, or other clinical information entered into the chart, might not have been "used or even necessarily looked at," he said. But in recent years, "they've become very valuable in the world of predictive analytics. There's a lot of information in those flow sheets that we can tap into for these models."
Intel has released new performance benchmarks for the Loihi neuromorphic computing processor, revealing improvements in power consumption and efficiency. During Intel's virtual Lab Day on Thursday, the tech giant revealed Loihi chip improvements in voice command recognition, gesture recognition for artificial intelligence (AI) applications, image retrieval, search functions, and robotics. Neuromorphic computing aims to propel rule-based and classical-logic AI designs into more flexible systems that emulate human cognition, including contextual interpretation, sensory applications, and autonomous adaptation. Intel says that neuromorphic computing focuses on emulating the human brain and implementing stable probabilistic computing, which creates "algorithmic approaches to dealing with the uncertainty, ambiguity, and contradiction in the natural world" -- just as humans do. However, speaking to attendees of the virtual event, Rich Uhlig, VP and Director of Intel Labs, added a caveat: progress has "come at the cost of ever-increasing power consumption, [...] posing challenges for AI and the democratization of AI" -- a bottleneck the company wants to overcome.
While 'data' might be the new oil, the 'dataset' is the refined gasoline that powers every Machine Learning (ML) and AI operation. These datasets are used to boost signal, accuracy, precision, profit/loss, and Sortino or Sharpe ratios in the financial markets and biosciences industries. The following is a transcript of a recent AMA hosted by ABT Crypto Academy on Telegram with the founder of Vectorspace AI, Kasian Franks. We're extremely privileged to be joined by Kasian Franks, the CEO of Vectorspace; it's only right we start off with a brief introduction -- can you tell us what exactly Vectorspace is and how the idea came about? We got our start in Life Sciences -- now most refer to it as Biosciences -- at Genentech and Lawrence Berkeley National Lab (https://www.lbl.gov). There we were tasked with creating a system to identify hidden relationships between genes, drugs and diseases connected to breast cancer, chromosomal radiation damage and extending human lifespan for the purpose of deep space travel. We wrote a paper with Michael I. Jordan, teacher of Andrew Ng, who went on to lead AI efforts at Google and China's Baidu. "The statistical modeling of biomedical corpora could yield integrated, coarse-to-fine views of biological phenomena that complement discoveries made f…" Human genes are like stocks or cryptos. Our technology is based on datasets.
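The "hidden relationships" idea above is, at its core, a vector-space one: represent each entity (a gene, a stock, a crypto) as a context vector and score relatedness by similarity. This is only a minimal sketch of that general technique, not Vectorspace AI's actual system; the entity names and embedding values below are made up for illustration.

```python
# Minimal sketch of vector-space relationship scoring (not Vectorspace AI's
# actual pipeline): entities are embedded as context vectors, and hidden
# relationships are surfaced by cosine similarity. Vectors are hypothetical.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical 3-dimensional context embeddings.
vectors = {
    "BRCA1":     [0.9, 0.1, 0.4],
    "tamoxifen": [0.8, 0.2, 0.5],
    "BTC":       [0.1, 0.9, 0.2],
}

gene_drug = cosine(vectors["BRCA1"], vectors["tamoxifen"])
gene_coin = cosine(vectors["BRCA1"], vectors["BTC"])
print(round(gene_drug, 3), round(gene_coin, 3))
```

With these made-up vectors, the gene–drug pair scores far higher than the gene–crypto pair, which is the kind of ranking such a dataset would feed into downstream ML models.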
In a project for the Defense Department's Defense Innovation Unit (DIU), computer scientists have turned to artificial intelligence and aerial imagery to construct a detailed damage assessment solution. The tool can be used remotely and automatically to determine the amount of damage to buildings and structures from a natural disaster or catastrophe. The prototype, known as the xView II model, was tested this fall, with the goal of rolling out a more finalized operational version next year. In the last few years, the U.S. military has seen an enormous amount of weather-related damage to some of its facilities, including the destruction at Tyndall Air Force Base, Florida, from Hurricane Michael in 2018; extensive water damage at Camp Lejeune, North Carolina, from Hurricane Florence's torrential rains in 2018; and flooding of the Missouri River and area creeks that impacted one-third of Offutt Air Force Base, Nebraska, in 2019. Meanwhile, this fall, California's wildfires raged across more than 4 million acres, causing irreparable damage, while repeated hurricanes battered the Gulf Coast.