New Report Finds Efforts to Slow Climate Change Are Working--Just Not Fast Enough
By virtually every key metric, efforts to fight climate change are going too slowly, according to findings by a coalition of climate groups. In some cases, things are moving in the wrong direction. An eroded iceberg is seen floating near Horseshoe Island, Antarctica. In the 10 years since the signing of the Paris Agreement, the backbone of international climate action, humanity has made impressive progress. Renewable energy is increasingly cheap and reliable, and electric vehicles are improving every year.
- Antarctica (0.24)
- Asia > China (0.05)
- South America > Brazil (0.04)
- (9 more...)
- Transportation > Ground > Road (1.00)
- Transportation > Electric Vehicle (1.00)
- Law (1.00)
- (5 more...)
MAGAnomics Isn't Working
A dismal jobs report affirms earlier warnings about the economic impact of Donald Trump's tariffs, immigration restrictions, and DOGE-led firings. At the start of last week, I watched a big cargo ship stacked high with containers enter New York Harbor. As the vessel approached the Verrazzano-Narrows Bridge, it appeared to stop, but that was an illusion created by its size and the slowness of its advance. Fifteen minutes later, it had managed to push its way under the bridge. Over the years, I've often compared the U.S. economy to a giant freighter that is tough to deflect from its course, and, since Donald Trump was elected for a second time, this metaphor has become particularly apt.
- Asia > South Korea (0.29)
- North America > United States > Texas (0.04)
- North America > United States > New York > Bronx County > New York City (0.04)
- (2 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Banking & Finance > Economy (1.00)
The Creator of the Smash Indie Game 'Animal Well' Is Already Working on His Next Project
Billy Basso was glued to his computer. It was launch day for the Chicago developer's debut solo game, a surreal Metroidvania called Animal Well, and he couldn't stop reading reviews online and watching people play the game. He'd pulled off the impossible: breaking through a turbulent industry to create a hit game that would grow to be a critical and commercial success. He just didn't yet realize quite how big it would be. Most successful video games are made by teams that range in size from half a dozen people to several hundred.
- North America > United States > Illinois > Cook County > Chicago (0.27)
- North America > United States > California > San Francisco County > San Francisco (0.07)
DOGE Is Working on Software That Automates the Firing of Government Workers
Engineers for Elon Musk's so-called Department of Government Efficiency, or DOGE, are working on new software that could assist mass firings of federal workers across government, sources tell WIRED. The software, called AutoRIF, which stands for Automated Reduction in Force, was first developed by the Department of Defense more than two decades ago. Since then, it's been updated several times and used by a variety of agencies to expedite reductions in workforce. Screenshots of internal databases reviewed by WIRED show that DOGE operatives have accessed AutoRIF and appear to be editing its code. A repository titled "autorif" sits in the Office of Personnel Management's (OPM) enterprise GitHub system, in a space created specifically for the director's office soon after Trump took office--an office where Musk associates have taken charge.
Working with Dimensionality Reduction, part 2 (Machine Learning)
Abstract: The weighted Euclidean distance between two vectors is a Euclidean distance where the contribution of each dimension is scaled by a given non-negative weight. The Johnson-Lindenstrauss (JL) lemma can be easily adapted to the weighted Euclidean distance if weights are known at construction time. Given a set of n vectors with dimension d, it suffices to scale each dimension of the input vectors according to the weights, and then apply any standard JL reduction: the weighted Euclidean distance between pairs of vectors is preserved within a multiplicative factor ε with high probability. However, this is not the case when weights are provided after the dimensionality reduction. In this paper, we show that by applying a linear map from real vectors to a complex vector space, it is possible to update the compressed vectors so that the weighted Euclidean distances between pairs of points can be computed within a multiplicative factor ε, even when weights are provided after the dimensionality reduction.
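The known-weights case described above can be sketched in a few lines of numpy: scale each coordinate by the square root of its weight, then apply a standard Gaussian JL projection. This is an illustration of the easy case only; the dimensions, weights, and projection here are arbitrary choices, and the paper's complex-valued construction for after-the-fact weights is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 10, 500, 2000            # k chosen large so distortion is small here
X = rng.normal(size=(n, d))        # input vectors
w = rng.uniform(0.5, 2.0, size=d)  # non-negative weights, known up front

# Known-weights case: scale each coordinate by sqrt(w_i), then apply a
# standard Gaussian JL projection with entries N(0, 1/k).
P = rng.normal(size=(d, k)) / np.sqrt(k)
Y = (X * np.sqrt(w)) @ P

# The plain Euclidean distance between compressed vectors now approximates
# the weighted Euclidean distance between the originals.
true = np.sqrt(np.sum(w * (X[0] - X[1]) ** 2))
approx = np.linalg.norm(Y[0] - Y[1])
```

The paper's contribution addresses the harder setting where the weights arrive only after the projection, which this sketch deliberately does not cover.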
Working with Projected Gradient Descent, part 2 (Machine Learning)
Abstract: The unit-modulus least squares (UMLS) problem has a wide spectrum of applications in signal processing, e.g., phase-only beamforming, phase retrieval, radar code design, and sensor network localization. Scalable first-order methods such as projected gradient descent (PGD) have recently been studied as a simple yet efficient approach to solving the UMLS problem. Existing results on the convergence of PGD for UMLS often focus on global convergence to stationary points. Because the problem is non-convex, only a sublinear convergence rate has been established. However, these results do not explain the fast convergence of PGD frequently observed in practice.
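PGD on UMLS is short enough to sketch directly: minimize ||Ax - b||^2 subject to |x_i| = 1 by alternating a gradient step on the smooth term with an entrywise projection back onto the unit circle. The toy instance, initialization, and 1/L step size below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 10
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
b = A @ np.exp(1j * rng.uniform(0, 2 * np.pi, n))  # consistent target

def pgd_umls(A, b, iters=500, seed=2):
    rng = np.random.default_rng(seed)
    x = np.exp(1j * rng.uniform(0, 2 * np.pi, A.shape[1]))  # unit-modulus init
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the smooth part
    f0 = np.linalg.norm(A @ x - b) ** 2     # initial objective, for reference
    for _ in range(iters):
        x = x - step * (A.conj().T @ (A @ x - b))  # gradient step
        x = x / np.abs(x)                          # project onto the unit circle
    return x, f0

x, f0 = pgd_umls(A, b)
```

With step size 1/L the iteration is a descent method: each projected step minimizes the quadratic upper bound of the smooth term over the constraint set, so the objective never increases.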
Working with Neural Circuits, part 3 (Artificial Intelligence)
Abstract: The purpose of this paper is threefold -- to serve as an instructive resource and a reference catalog for biologically plausible modeling with i) conductance-based models, coupled with ii) strength-varying slow synapse models, culminating in iii) two canonical pair-wise rhythm-generating networks. We document the properties of basic network components: cell models and synaptic models, which are prerequisites for proper network assembly. Using the slow-fast decomposition we present a detailed analysis of the cellular dynamics, including a discussion of the most relevant bifurcations. Several approaches to model synaptic coupling are also discussed, and a new logistic model of slow synapses is introduced. Finally, we describe and examine two types of bicellular rhythm-generating networks: i) half-center oscillators and ii) excitatory-inhibitory pairs, and elucidate a key principle -- the network hysteresis underlying the stable onset of emergent slow bursting in these neural building blocks.
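The half-center motif itself can be illustrated with a rate-model caricature: two units with reciprocal inhibition plus a slow adaptation variable per unit. This is a generic sketch with arbitrary parameters, far simpler than the conductance-based cells and slow synapse models the paper analyzes.

```python
import numpy as np

def half_center(T=4000, dt=0.01, drive=1.0, w_inh=2.0, tau_a=2.0):
    # Two units with reciprocal inhibition and slow adaptation: a generic
    # half-center sketch, NOT the paper's conductance-based formulation.
    x = np.array([1.0, 0.0])  # fast activity variables (asymmetric start)
    a = np.zeros(2)           # slow adaptation variables
    hist = np.empty((T, 2))
    for t in range(T):
        r = np.maximum(x, 0.0)                        # rectified firing rates
        hist[t] = r
        x = x + dt * (-x + drive - w_inh * r[::-1])   # each unit inhibits the other
        x = x - dt * a                                # slow adaptation pushes down
        a = a + dt * (r - a) / tau_a                  # adaptation tracks activity
    return hist

hist = half_center()
```

The slow/fast split (tau_a much larger than the unit time constant) is what lets adaptation erode the active unit's dominance and hand activity to its partner, the same slow-fast structure the paper exploits analytically.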
Working with Elimination Algorithms, part 1 (Machine Learning)
Abstract: Node elimination is a numerical approach to obtain cubature rules for the approximation of multivariate integrals. Beginning with a known cubature rule, nodes are selected for elimination, and a new, more efficient rule is constructed by iteratively solving the moment equations. This paper introduces a new criterion for selecting which nodes to eliminate that is based on a linearization of the moment equation. In addition, a penalized iterative solver is introduced that ensures that weights are positive and nodes are inside the integration domain. A strategy for constructing an initial quadrature rule for various polytopes in several space dimensions is described.
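A one-dimensional toy makes the moment-equation mechanics concrete: start from three-point Simpson on [-1, 1] (exact through degree 3), eliminate the middle node, and re-solve the moment equations with Newton's method, which recovers the two-point Gauss-Legendre rule. The paper's selection criterion and penalization are not reproduced here; this only shows the "eliminate, then re-solve" step.

```python
import numpy as np

def moment(i):
    # Exact moment of x^i on [-1, 1]
    return 0.0 if i % 2 else 2.0 / (i + 1)

def resolve_rule(nodes, weights, degree=3, iters=20):
    # Newton's method on the moment equations
    #   sum_j w_j x_j^i = int_{-1}^{1} x^i dx,  i = 0..degree
    m = len(nodes)
    z = np.concatenate([nodes, weights]).astype(float)
    target = np.array([moment(i) for i in range(degree + 1)])
    for _ in range(iters):
        x, w = z[:m], z[m:]
        F = np.array([w @ x**i for i in range(degree + 1)]) - target
        # Jacobian blocks: dF_i/dx_j = i w_j x_j^(i-1),  dF_i/dw_j = x_j^i
        Jx = np.array([i * w * x**max(i - 1, 0) if i else np.zeros(m)
                       for i in range(degree + 1)])
        Jw = np.array([x**i for i in range(degree + 1)])
        J = np.hstack([Jx, Jw])
        z = z - np.linalg.lstsq(J, F, rcond=None)[0]
    return z[:m], z[m:]

# Simpson's rule uses nodes (-1, 0, 1) with weights (1/3, 4/3, 1/3).
# Drop the middle node, redistribute its weight, and re-solve: Newton
# converges to the two-point Gauss-Legendre rule (nodes +-1/sqrt(3)).
nodes, weights = resolve_rule(np.array([-1.0, 1.0]), np.array([1.0, 1.0]))
```

In the eliminated rule the node count drops from three to two while degree-3 exactness is preserved, which is exactly the efficiency gain node elimination is after.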
Working with the concept of Self-Imitation Learning, part 1 (Machine Learning)
Abstract: Imitation learning (IL) enables robots to acquire skills quickly by transferring expert knowledge, which is widely adopted in reinforcement learning (RL) to initialize exploration. However, in long-horizon motion planning tasks, a challenging problem in deploying IL and RL methods is how to generate and collect massive, broadly distributed data such that these methods can generalize effectively. In this work, we solve this problem using our proposed approach called self-imitation learning by planning (SILP), where demonstration data are collected automatically by planning on the visited states from the current policy. SILP is inspired by the observation that successfully visited states in the early reinforcement learning stage are collision-free nodes in the graph-search-based motion planner, so we can plan and relabel the robot's own trials as demonstrations for policy learning. Due to these self-generated demonstrations, we relieve the human operator from the laborious data preparation process required by IL and RL methods in solving complex motion planning tasks.
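The plan-and-relabel idea can be sketched with a toy graph search over visited states: treat collision-free visited states as nodes, connect nearby ones, and return a planned path as a demonstration. The helper below is hypothetical (names, BFS planner, and grid example are all illustrative assumptions, not the paper's implementation).

```python
from collections import deque

def silp_relabel(visited, is_edge, start, goal):
    # SILP-style relabeling sketch: plan over states the policy has already
    # visited and return the path as a demonstration (hypothetical helper).
    adj = {s: [t for t in visited if t != s and is_edge(s, t)] for s in visited}
    parent = {start: None}
    queue = deque([start])
    while queue:  # BFS over the visited-state graph
        s = queue.popleft()
        if s == goal:
            demo = []
            while s is not None:  # walk parents back to the start
                demo.append(s)
                s = parent[s]
            return demo[::-1]     # state sequence usable as a demonstration
        for t in adj[s]:
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None  # goal not reachable from the visited states

# Toy grid example: four visited cells, edges between Manhattan neighbors.
visited = [(0, 0), (0, 1), (1, 1), (2, 1)]
adjacent = lambda s, t: abs(s[0] - t[0]) + abs(s[1] - t[1]) == 1
demo = silp_relabel(visited, adjacent, (0, 0), (2, 1))
```

The point of the sketch is the data flow: the planner consumes the policy's own rollouts and emits demonstrations, so no human needs to prepare data.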
Working with the concept of Self-Imitation Learning, part 2 (Machine Learning)
Abstract: The application of reinforcement learning (RL) in robotic control is still limited in environments with sparse and delayed rewards. In this paper, we propose a practical self-imitation learning method named Self-Imitation Learning with Constant Reward (SILCR). Instead of requiring hand-defined immediate rewards from environments, our method assigns constant immediate rewards at each timestep according to the final episodic reward. In this way, even if the dense rewards from environments are unavailable, every action taken by the agents is guided properly. We demonstrate the effectiveness of our method in some challenging continuous robotics control tasks in MuJoCo simulation, and the results show that our method significantly outperforms the alternative methods in tasks with sparse and delayed rewards.
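One plausible reading of the constant-reward assignment, as a toy sketch: spread the episodic return evenly across the episode's timesteps. The specific mapping from episodic return to per-step constant used here is an assumption for illustration, not taken from the paper.

```python
def silcr_rewards(episode_return, episode_length):
    # SILCR-style relabeling sketch: when dense environment rewards are
    # unavailable, give every timestep the same constant reward derived
    # from the final episodic return (assumed here to be a simple average).
    r = episode_return / episode_length
    return [r] * episode_length

# A length-5 episode whose final return was 10.0 gets 2.0 at every step.
rewards = silcr_rewards(10.0, 5)
```

Because the relabeled per-step rewards sum back to the episodic return, the agent's credit signal stays consistent with the true outcome even without dense feedback.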