
Collaborating Authors

Morgan


logLTN: Differentiable Fuzzy Logic in the Logarithm Space

Badreddine, Samy, Serafini, Luciano, Spranger, Michael

arXiv.org Artificial Intelligence

The AI community is increasingly focused on merging logic with deep learning to create Neuro-Symbolic (NeSy) paradigms and assist neural approaches with symbolic knowledge. A significant trend in the literature involves integrating axioms and facts in loss functions by grounding logical symbols with neural networks and operators with fuzzy semantics. Logic Tensor Networks (LTN) is one of the main representatives in this category, known for its simplicity, efficiency, and versatility. However, it has been previously shown that not all fuzzy operators perform equally when applied in a differentiable setting. Researchers have proposed several configurations of operators, trading off between effectiveness, numerical stability, and generalization to different formulas. This paper presents a configuration of fuzzy operators for grounding formulas end-to-end in the logarithm space. Our goal is to develop a configuration that is more effective than previous proposals, able to handle any formula, and numerically stable. To achieve this, we propose semantics that are best suited for the logarithm space and introduce novel simplifications and improvements that are crucial for optimization via gradient-descent. We use LTN as the framework for our experiments, but the conclusions of our work apply to any similar NeSy framework. Our findings, both formal and empirical, show that the proposed configuration outperforms the state-of-the-art and that each of our modifications is essential in achieving these results.
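The core idea, computing fuzzy conjunctions and aggregations as sums of logarithms rather than products of truth values, can be illustrated with a minimal sketch. The function names and the geometric-mean aggregator below are illustrative assumptions for the product t-norm case, not the paper's exact operator configuration:

```python
import math

def log_and(log_truths):
    # Product t-norm in log space: log(prod t_i) = sum(log t_i).
    # Summing logs avoids the numerical underflow that multiplying
    # many small truth values in [0, 1] would cause.
    return sum(log_truths)

def log_forall(log_truths):
    # A common aggregator for universal quantification: the log of
    # the geometric mean of the truth values.
    return sum(log_truths) / len(log_truths)

# Hypothetical log-truth-values of 1000 facts, each close to 1.
log_t = [math.log(0.999)] * 1000
print(math.exp(log_and(log_t)))    # ~0.368: the product survives in log space
print(math.exp(log_forall(log_t))) # ~0.999
```

Multiplying 1000 raw truth values directly would still work here, but with truth values nearer zero the product underflows to 0.0 and its gradient vanishes; the log-space sum stays finite, which is the numerical-stability argument the abstract refers to.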


Exploring Generative Adversarial Networks for Image-to-Image Translation in STEM Simulation

Lawrence, Nick, Shen, Mingren, Yin, Ruiqi, Feng, Cloris, Morgan, Dane

arXiv.org Artificial Intelligence

Accurate scanning transmission electron microscopy (STEM) image simulation methods require large computation times that can make them infeasible for simulating many images. Other simulation methods based on linear imaging models, such as the convolution method, are much faster but too inaccurate to be used in application. In this paper, we explore deep learning models that attempt to translate a STEM image produced by the convolution method into a prediction of the high-accuracy multislice image. We then compare our results to those of regression methods. We find that the deep learning model known as the Generative Adversarial Network (GAN) provides the best results and performs at an accuracy level similar to previous regression models on the same dataset.
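The linear imaging model the paper starts from approximates a STEM image as the projected specimen potential convolved with the probe's point spread function (PSF). The sketch below is an illustration under our own assumptions, not the authors' code:

```python
import numpy as np

def convolution_image(projected_potential, psf):
    # Linear imaging model ("convolution method"): the image is the
    # projected specimen potential convolved with the probe PSF.
    # FFT-based circular convolution; ifftshift moves the PSF's
    # center to the array origin, as the FFT convention expects.
    pp = np.fft.fft2(projected_potential)
    kf = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(pp * kf))

# Toy example: a single point "atom" blurred by a Gaussian probe.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()                      # normalize so intensity is preserved
potential = np.zeros((n, n))
potential[n // 2, n // 2] = 1.0
image = convolution_image(potential, psf)
```

This model is fast (two FFTs per image) but ignores dynamical scattering, which is why its output differs from a multislice simulation; the paper's GAN learns to map images like `image` above toward the multislice result.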


'The Five' on Disney controversy, COVID boosters

FOX News

'The Five' panel weighs in on Disney's protest over the new Florida law. This is a rush transcript of "The Five" on March 30, 2022. This copy may not be in its final form and may be updated. It's five o'clock in New York City, and this is THE FIVE. WATTERS: Yet another big American company going woke. Disney caving to the liberal mob and pledging to help repeal the new parental rights bill that was just signed into law by Florida Governor Ron DeSantis, which Democrats and the media have completely lied about and called bigoted. RON DESANTIS (R-FL): For a company like Disney to say that the bill should have never passed, first of all, Tucker, they weren't saying anything when this was going through the House. They only started doing this because the mob, the woke mob, came after them. But put that aside, for them to say that they, as a California-based company, are going to work to take those California values and overturn a law that was duly enacted, and as you said, supported by a strong majority of Floridians, they don't run the state. Had we done a bill that prohibited talking about the abuse of Uyghurs in China, Disney would have supported that legislation. Now, you know, it's "hello, everyone," or "hello, friends." When we brought the fireworks back to the Magic Kingdom, we no longer say ladies and gentlemen, boys and girls; we say dreamers of all ages. PIERS MORGAN, FOX NEWS CO-HOST: By the way, sorry, but when you can't even say ladies and gentlemen anymore, or boys and girls, what in the world is happening in the world? WATTERS: Would you like to go ahead? I was going to ask Greg. WATTERS: Piers, with your, you know, very, very rude habits. GUTFELD: I think I just had a stroke. MORGAN: You deserve a pretty outburst of rage. My heart -- (CROSSTALK) MORGAN: Honestly, it is so pathetic, isn't it? You can't go to a theme park and you can't hear the words ladies and gentlemen, boys and girls.


Computers Use Machine Learning to Detect Radiation Damage Better Than Humans Do

#artificialintelligence

Developing safe nuclear reactor materials depends on a critical, though tedious and time-consuming, task: sifting through electron microscopy images of materials exposed to radiation to identify radiation damage. This monotonous task has traditionally fallen to image-processing algorithms programmed to identify patterns in images that look like Jackson Pollock paintings. Researchers at the University of Wisconsin-Madison and Oak Ridge National Laboratory may have found a faster and more accurate alternative: letting computers learn how to identify the damage by themselves. "Human detection and identification is error-prone, inconsistent and inefficient," said Dane Morgan, materials science and engineering professor. "Newer imaging technologies are outstripping human capabilities to analyze the data we can produce."


Eagle-eyed machine learning algorithm outdoes human experts

#artificialintelligence

Artificial intelligence is now so smart that silicon brains frequently outthink people. Computers operate self-driving cars, pick friends' faces out of photos on Facebook, and are learning to take on jobs typically entrusted only to human experts. Researchers from the University of Wisconsin–Madison and Oak Ridge National Laboratory have trained computers to quickly and consistently detect and analyze microscopic radiation damage to materials under consideration for nuclear reactors. And the computers bested humans in this arduous task. "Machine learning has great potential to transform the current, human-involved approach of image analysis in microscopy," says Wei Li, who earned his master's degree in materials science and engineering this year from UW–Madison.


J.P. Morgan's data scientists are about to launch something wild

#artificialintelligence

Something is underway at J.P. Morgan. In the coming weeks or months, the bank will be launching a new website to allow anyone to compete in the creation of accurate predictions based on large sets of data. To be known as RoarData, the site isn't live yet, but the bank plans to launch it in mid-2018 (after delaying the launch from early 2018). J.P. Morgan has been busy hiring for the Roar team, which sits within the data science unit of its corporate and investment bank (CIB). Applicants need to be "exceptional coders" who are familiar with key machine learning tools like TensorFlow and Chainer.


JP Morgan is unleashing artificial intelligence on a business that moves $5 trillion for corporations every day

#artificialintelligence

J.P. Morgan wouldn't disclose what it spent on this project but has said that 40 percent of its $10.8 billion annual technology budget is devoted to new efforts, including AI, robotic process automation and blockchain. "Based on your behavior each time, it will start to learn what you ask for," Tiede said in an interview. "We think there's a huge opportunity to suggest creative and insightful recommendations to clients. When you log in, it can say, 'Looks like you have sent 100 US dollar wires to Singapore. Do you know you could send a foreign-exchange ACH payment instead? Click here to sign up.'"


How Bethesda plans to pull players back to 'Prey'

Engadget

Last year's Prey was a creepy shooter and role-playing game set on a spaceship riddled with black, shimmering aliens. The so-called 'immersive sim' was praised for its science fiction story, which let you shape the main character and the fate of the hostile research station. The gameplay, though, was seen by many as a retread of BioShock, System Shock and other genre classics. Despite its wild Neuromod abilities, which let you become an expert hacker, fighter or shape-shifting alien, the rebooted Prey failed to catch the public's attention. The title is far from finished, though.


BAML hires top machine-learning quant from J.P. Morgan

#artificialintelligence

Bank of America Merrill Lynch hired Rajesh Krishnamachari, formerly a senior quantitative strategist and researcher at J.P. Morgan, as the head of data science for equities in New York last month. BofA's new equities-focused data-science team is using machine learning and artificial intelligence to get insights from proprietary data and develop new products that have an impact on the top and bottom line of the business. A Bank of America spokeswoman confirmed his employment but declined to comment further. Krishnamachari joined J.P. Morgan's equity derivatives quantitative research team in 2014. Primarily using Python, Java and the XGBoost software library, he designed and back-tested systematic options, VIX and equities trading strategies, as well as an ultra-high-frequency execution algorithm for trading VIX futures.