Secret Service changes the agency has made post-Trump Butler assassination attempt
Former Secret Service special agent Richard Staropoli weighs in on new details about President Donald Trump's second assassination attempt on 'The Story.'

The Secret Service has ushered in a series of changes to beef up its security measures in the aftermath of the July 2024 assassination attempt against President Donald Trump in Butler, Pennsylvania – including suspending six of its agents over their response to the crisis. Secret Service Deputy Director Matt Quinn disclosed the suspensions Wednesday in an interview with CBS News, saying the penalties ranged from 10 to 42 days of unpaid leave. He added that the agents would return to restricted roles following their suspensions, and that the agency was "laser focused on fixing the root cause of the problem." "Secret Service is totally accountable for Butler," Quinn told CBS. "Butler was an operational failure and we are focused today on ensuring that it never happens again."
- North America > United States > Pennsylvania (0.28)
- North America > United States > Maryland > Prince George's County > Laurel (0.05)
More non-fiction authors are suing OpenAI and Microsoft
In November, a group of non-fiction authors filed a lawsuit accusing OpenAI and Microsoft of using other people's intellectual property without permission to train the former's generative AI technology. Now, more non-fiction writers are suing the companies for using their work to train OpenAI's GPT large language models (LLM). Professional writers "have limited capital to fund their research" and "typically self-fund their projects," they said in their complaint. The plaintiffs added that the companies could've explored alternative financing options, such as profit sharing, but have "decided to steal" instead. They're seeking up to $150,000 per infringed work in damages, as well as a permanent injunction "to prevent these harms from recurring."
Delving inside the mind: Incredible graphics reveal what each section of your BRAIN does - with more than 70,000 thoughts processed every single day
Published in 1909, Korbinian Brodmann's groundbreaking analysis of the brain can still be found in neurology textbooks and on classroom posters to this day. Using a specialized microscope, Brodmann painstakingly analyzed the entire surface of the cerebral cortex on the basis of cellular structure alone. After a decade of effort, he had produced the most detailed map of the cerebral cortex to date, assigning each region a different number. Over time these areas have been widely used to link brain regions with specific functions, such as area four, the primary motor cortex. This region of the cerebral cortex is believed to control motor movements such as moving the hands and face, as well as breathing and voluntary blinking. Brodmann's areas have also been mapped to functions such as processing numbers, planning, and processing emotions. Of course, the complexity doesn't stop there: scientists now believe the cortex has at least 180 distinct regions important for language, perception, consciousness, and attention.
- North America > United States > Vermont (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
Lexus Dreamed Up Two Different Autonomous Futures That Will Help Us Relax
When it comes to autonomous vehicles, one of the biggest complaints is that they'll take away the reactive, human element of driving that sees us making split-second decisions and not, y'know, immediately crashing into walls when the car is unleashed. It's hard to relax and let some code control your vehicle. But Lexus has paired with two TED Fellows to design two different ways we might start relaxing into our autonomous cars. The TED Fellows program provides a way for researchers across countless disciplines to receive support, build a community, and come up with some really cool ideas about how we can power our future.
GLUCOSE: GeneraLized and COntextualized Story Explanations
Mostafazadeh, Nasrin, Kalyanpur, Aditya, Moon, Lori, Buchanan, David, Berkowitz, Lauren, Biran, Or, Chu-Carroll, Jennifer
When humans read or listen, they make implicit commonsense inferences that frame their understanding of what happened and why. As a step toward AI systems that can build similar mental models, we introduce GLUCOSE, a large-scale dataset of implicit commonsense causal knowledge, encoded as causal mini-theories about the world, each grounded in a narrative context. To construct GLUCOSE, we drew on cognitive psychology to identify ten dimensions of causal explanation, focusing on events, states, motivations, and emotions. Each GLUCOSE entry includes a story-specific causal statement paired with an inference rule generalized from the statement. This paper details two concrete contributions: First, we present our platform for effectively crowdsourcing GLUCOSE data at scale, which uses semi-structured templates to elicit causal explanations. Using this platform, we collected 440K specific statements and general rules that capture implicit commonsense knowledge about everyday situations. Second, we show that existing knowledge resources and pretrained language models do not include or readily predict GLUCOSE's rich inferential content. However, when state-of-the-art neural models are trained on this knowledge, they can start to make commonsense inferences on unseen stories that match humans' mental models.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Oceania > Australia > Victoria > Melbourne (0.05)
- North America > United States > New York > New York County > New York City (0.04)
- (10 more...)
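The abstract above describes each GLUCOSE entry as a story-grounded causal statement paired with a generalized inference rule along one of ten causal dimensions. A minimal sketch of what such a record might look like as a data structure follows; the field names, the dimension numbering, and the `>Causes/Enables>` connective shown here are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass


@dataclass
class GlucoseEntry:
    """One GLUCOSE-style record (illustrative schema, not the official one):
    a story-specific causal statement plus the general rule abstracted
    from it, tagged with one of the ten causal dimensions."""
    story: str               # narrative context the inference is grounded in
    dimension: int           # which causal dimension (1-10), an assumption here
    specific_statement: str  # story-specific cause-effect statement
    general_rule: str        # generalized rule with variable slots


# Hypothetical example entry, in the spirit of the paper's mini-theories.
entry = GlucoseEntry(
    story="Gage's friend had recently taken up chess and become proficient.",
    dimension=1,
    specific_statement=(
        "The friend practices chess >Causes/Enables> The friend becomes proficient"
    ),
    general_rule=(
        "Someone_A practices Something_A >Causes/Enables> "
        "Someone_A becomes proficient at Something_A"
    ),
)
```

Pairing each specific statement with its generalization is what lets a model trained on the data transfer mini-theories to unseen stories, as the abstract reports.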
High Temporal Resolution Rainfall Runoff Modelling Using Long-Short-Term-Memory (LSTM) Networks
Li, Wei, Kiaghadi, Amin, Dawson, Clint N.
Accurate and efficient models for rainfall runoff (RR) simulations are crucial for flood risk management. Most rainfall-runoff models in use today are process-driven; i.e., they solve either simplified empirical formulas or some variation of the St. Venant (shallow water) equations. With the development of machine-learning techniques, we may now be able to emulate such models using, for example, neural networks. In this study, a data-driven RR model using a sequence-to-sequence Long Short-Term Memory (LSTM) network was constructed. The model was tested for a watershed in Houston, TX, known for severe flood events. The LSTM network's ability to learn long-term dependencies between the input and output allowed modeling RR at high temporal resolution (15 minutes). The model's performance in predicting river discharge was evaluated in several numerical tests using 10 years of precipitation data from 153 rainfall gages and river channel discharge data (more than 5.3 million data points). The results were also compared with the output of a process-driven model, the Gridded Surface Subsurface Hydrologic Analysis (GSSHA), and the physical consistency of the LSTM model was explored. The results showed that the LSTM model was able to predict discharge efficiently and achieve good performance. Compared to GSSHA, the data-driven model was more efficient and robust in terms of prediction and calibration. Interestingly, the performance of the LSTM model improved (test Nash-Sutcliffe model efficiency rising from 0.666 to 0.942) when a subset of rainfall gages, selected based on model performance, was used as input instead of all gages.
- North America > United States > Texas > Harris County > Houston (0.68)
- North America > United States > Texas > Travis County > Austin (0.14)
- South America > Uruguay > Maldonado > Maldonado (0.04)
- (6 more...)
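The abstract reports model skill as Nash-Sutcliffe efficiency (NSE), the standard hydrology metric: one minus the ratio of the squared prediction error to the variance of the observations about their mean, so 1.0 is a perfect fit and 0.0 means the model is no better than always predicting the observed mean. A minimal sketch of that computation, assuming plain lists of observed and simulated discharge values:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency.

    NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    Returns 1.0 for a perfect match, 0.0 when the model performs
    no better than predicting the observed mean, and can go negative
    for models worse than the mean.
    """
    obs_mean = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_about_mean = sum((o - obs_mean) ** 2 for o in observed)
    return 1.0 - sse / ss_about_mean


# A perfect prediction scores 1.0; predicting the mean scores 0.0.
perfect = nash_sutcliffe([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
mean_only = nash_sutcliffe([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])
```

Under this metric, the reported jump from 0.666 to 0.942 on test data is substantial: it reflects a large reduction in squared error relative to the natural variability of the discharge record.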
How Your Brain (and a Computer) Learn the 'Rules of the Game'
In 1848, the 25-year-old Phineas Gage was working on a railroad in Vermont, packing explosive powder into a hole with an iron tamper. Unexpectedly, the powder exploded, sending the tamper backwards through Gage's skull and brain. That he survived is a miracle; astonishingly, he even seemed capable of functioning effectively, maintaining normal memory, speech, and motor skills. Those who knew him, however, thought he was anything but the same, with friends remarking he was "no longer Gage." "…his equilibrium, or balance, so to speak, between his intellectual faculties and animal propensities seems to have been destroyed. He is fitful, irreverent, indulging in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires."
- North America > United States > Vermont (0.25)
- North America > United States > Wisconsin (0.05)
- Health & Medicine > Therapeutic Area > Neurology (0.36)
- Leisure & Entertainment > Games > Computer Games (0.32)
Really Bad Chess makes chess fun even if you're really bad
Last summer, game designer Zach Gage -- best known for mobile titles like Spelltower and Ridiculous Fishing -- went for a walk with a friend, and eventually the discussion turned to chess. His friend had recently taken up the game and become quite proficient, whereas Gage was never able to get into it, despite multiple attempts. This gulf in skill and experience meant that the two couldn't play together in a meaningful way; if they tried, Gage would get crushed, and the match wouldn't be much fun. So he decided to fix the problem. Today sees the launch of Really Bad Chess on iOS.
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Mobile (0.85)