According to the latest audit from NASA's inspector general, the Space Launch System (SLS) rocket designed to take astronauts to the Moon is substantially over budget and far behind schedule. NASA's spending on the Artemis Moon Program is expected to reach $93 billion by 2025, including the $23.8 billion already spent on SLS through 2022. That sum represents "$6 billion in cost increases and over six years in schedule delays above NASA's original projections," says the report. One of the issues has been integrating older NASA technology with newer systems. "These increases are caused by interrelated issues such as assumptions that the use of heritage technologies… were expected to result in significant cost and schedule savings compared to developing new systems for the SLS," the audit states.
Chinese space officials said Monday that the country plans to place astronauts on the moon before 2030, as well as expand its space station. The deputy director of the China Manned Space Agency confirmed those objectives at a press conference at the Jiuquan Satellite Launch Center, but did not provide a detailed timeline. Deputy Director Lin Xiqiang told reporters that the country is first preparing for a "short stay on the lunar surface and human-robotic joint exploration." "We have a complete near-Earth human space station and human round-trip transportation system," he said.
Sen. Lindsey Graham, R-S.C., is a wanted man in Russia for comments he made while meeting with Ukrainian President Volodymyr Zelenskyy on May 26, 2023, during the senator's third visit to Ukraine since Russia invaded the country. Russia's Interior Ministry issued a warrant for Graham's arrest on Monday in response to an edited video released by Zelenskyy's office, in which Graham praised U.S. support for Ukraine's defense, noted that "the Russians are dying" as Ukraine fights for its freedom, and described U.S. military assistance to the country as "the best money we've ever spent." While Graham appeared to have made the remarks in different parts of the conversation, the short video from Ukraine's presidential office placed them next to each other, causing outrage in Russia.
Fast-evolving AI technology could turbocharge misinformation in U.S. political campaigns, observers say. The 2024 presidential race is expected to be the first American election that will see the widespread use of advanced tools powered by artificial intelligence that have increasingly blurred the boundaries between fact and fiction. Campaigns on both sides of the political divide are likely to harness this technology -- which is cheap, easily accessible and whose advances have vastly outpaced regulatory responses -- for voter outreach and to churn out fundraising newsletters within seconds.
It was a tough week in tech. The top US health official warned about the risks of social media to young people; tech billionaire Elon Musk further trashed his reputation with the disastrous Twitter launch of a presidential campaign; and senior executives at OpenAI, makers of ChatGPT, called for the urgent regulation of "super intelligence". But to Doug Rushkoff – a leading digital age theorist, early cyberpunk and professor at City University of New York – the triple whammy of rough events represented some timely corrective justice for the tech barons of Silicon Valley. And more may be to come as new developments in tech come ever thicker and faster. "They're torturing themselves now, which is kind of fun to see. They're afraid that their little AIs are going to come for them. They're apocalyptic, and so existential, because they have no connection to real life and how things work. They're afraid the AIs are going to be as mean to them as they've been to us," Rushkoff told The Guardian in an interview.
An unknown object with flashing lights appeared to hover over a Marine base in Twentynine Palms, California, in 2021. A Stanford University pathology professor said, "Aliens have been on Earth for a long time and are still here," and claims there are experts working on reverse engineering unknown crashed craft. Dr. Garry Nolan made the bold statements during last week's SALT iConnections conference in Manhattan, at a session called "The Pentagon, Extraterrestrial Intelligence and Crashed UFOs." The host, Alex Klokus, said that's tough to believe and asked him to assign a probability to the statement that extraterrestrial life has visited Earth. "I think it's an advanced form of intelligence that [is] using some kind of intermediaries," Nolan said.
The Department of Education is worried that artificial intelligence systems could be used to surveil teachers once the systems are introduced into the classroom, and warned in a new report that allowing that to happen would make teachers' jobs "nearly impossible." The department released a report this week on "Artificial Intelligence and the Future of Teaching and Learning," which also argued that AI should never be used to replace human teachers. The report is aimed at assessing the prospects of expanding AI into the classroom. While it says that AI could make teaching more efficient and help tailor lesson plans to individual students, it warned that AI might also expose teachers to increased surveillance once deployed.
Brain-computer interface company Neuralink announced on 25 May that it has received approval from the US Food and Drug Administration (FDA) for a clinical study in humans. Neuralink made the announcement on Twitter: "We are excited to share that we have received the FDA's approval to launch our first-in-human clinical study." The tweet said that the approval "represents an important first step that will one day allow our technology to help many people". The firm also said that recruitment for the trial is not yet open, and it has yet to give any further details about what the trial will entail. Neuralink was formed in 2016 by Elon Musk and a group of scientists and engineers with the ultimate aim of making devices that interface with the human brain – both reading information from neurons and feeding information directly back into the brain.
As the artificial intelligence frenzy builds, a sudden consensus has formed. While there's a very real question whether this is like closing the barn door after the robotic horses have fled, not only government types but also people who build AI systems are suggesting that some new laws might be helpful in stopping the technology from going bad. The idea is to keep the algorithms in the loyal-partner-to-humanity lane, with no access to the I-am-your-overlord lane. Though many in the technology world have suggested since the dawn of ChatGPT that legal guardrails might be a good idea, the most emphatic plea came from AI's most influential avatar of the moment, OpenAI CEO Sam Altman. "I think if this technology goes wrong, it can go quite wrong," he said in a much anticipated appearance before a US Senate Judiciary subcommittee earlier this month. "We want to work with the government to prevent that from happening."