BEIJING – Chinese state media outlet Global Times reports that pilots of the People's Liberation Army are training with artificial-intelligence-equipped simulators. According to Liu Xuanzun, the AI-assisted training process is two-way: the pilot hones his skills against the AI, while the AI learns from the pilot's actions. There is already evidence that AI developed in China is gaining a serious edge over human pilots. In a recent simulated "battle" between Chinese pilot Fang Guoyu and an AI, the AI won for the second time, according to the publication. China says the future of its fifth-generation J-20 fighter and next-generation fighters will be built around the use of artificial intelligence.
Artificial intelligence is also on the advance in IT security. In a survey of 300 managers, 96 percent reported that their companies are preparing for AI-supported IT attacks, in part by relying on the help of "defensive AI". The survey was carried out with the assistance of the AI cybersecurity provider Darktrace. A survey of around 200 IT managers at medium-sized companies produced a more nuanced result.
Brussels – NATO leaders warned Monday that China's military ambitions pose "systemic challenges" to their alliance, and agreed to enhance ties with Japan and other Asia-Pacific nations to back the rules-based international order. The tough line against Beijing, taken in a communique released after the NATO summit, came as U.S. President Joe Biden rallies allies to counter what he calls autocracies like China and Russia that are challenging an open international order. "China's stated ambitions and assertive behavior present systemic challenges to the rules-based international order and to areas relevant to alliance security," said the communique from the 30-member organization that brings together North American and European countries. The leaders also expressed concerns over what they called China's coercive policies, while pointing out the country's rapid expansion of its nuclear arsenal and criticizing the opaqueness of its military modernization. The communique, meanwhile, named Australia, Japan, New Zealand and South Korea as countries with which NATO plans to strengthen its "political dialogue and practical cooperation" in a bid to promote cooperative security and support the rules-based international order.
Archer Materials has announced signing a deal with the Australian Missile Corporation (AMC) that will see the former work on the development of sovereign defence capabilities. The Australian Missile Corporation is a subsidiary of NIOA, a Defence prime contractor and the largest Australian-owned supplier of weapons and munitions to Defence. Archer is developing quantum computing processor chip technology, and said it currently possesses advanced semiconductor manufacturing capabilities that will be of benefit to a future sovereign guided weapons enterprise. The non-binding letter of intent Archer has signed with AMC will be focused on its 12CQ quantum computing chip technology. The agreement forms part of Prime Minister Scott Morrison's AU$1 billion Sovereign Guided Weapons Enterprise initiative which he said will support missile and guided weapons manufacturing for use across the Australian Defence Force. The initiative will receive a total of AU$270 billion over the next decade to strengthen Australia's defence forces through high-tech submarines, new fighter jets, hypersonic weapons, and advanced munitions.
If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation – flagged and unflagged – has been aimed at the general public. Imagine the possibility of misinformation – information that is false or misleading – in scientific and technical fields like cybersecurity, public safety and medicine. There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community.
If you follow my blogs, you know that I've been devoting a fair amount of attention to artificial intelligence, and how it gives reasons for both optimism and serious ethical pause. In this one, I want to discuss the potential for a new conflict not dissimilar to the Cold War, which was fuelled by the development and proliferation of nuclear technology; this time, AI will take centre stage of the theatre. Much like nuclear expansion, artificial intelligence comes with its own bag of pros and cons. Nuclear energy has undoubtedly been harnessed for the common good of mankind. Take water desalination: reducing the saline content of seawater is extremely costly and inefficient, a problem nuclear power has helped address.
Traditional cybersecurity isn't necessarily bad at detecting attacks; the trouble is that it often does so only after they have occurred. A better approach is to spot potential attacks and block them before they can do any damage. One possible way of doing this is via 'deep learning', which allows the technology to tell the difference between good and bad. We spoke with Brooks Wallace, cybersecurity sales leader at Deep Instinct, to find out more about this innovative solution. BW: If you look at cybersecurity, there's always been this holy grail of prevention.
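To make the "good versus bad" idea concrete, here is a minimal, purely illustrative sketch of the underlying principle: a classifier trained on labeled examples so it can block suspicious inputs before they run. This is not Deep Instinct's actual system; the features (an entropy score and a suspicious-API-call score) and the single-neuron model are hypothetical stand-ins for a real deep network.

```python
import math

# Toy labeled data: each sample is [entropy score, suspicious-API score]
# (hypothetical features); label 1 = malicious, 0 = benign.
TRAIN = [
    ([0.2, 0.1], 0), ([0.3, 0.2], 0), ([0.1, 0.3], 0),
    ([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.7, 0.7], 1),
]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, lr=0.5, epochs=2000):
    """Fit a single sigmoid neuron by gradient descent on log-loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = pred - y  # gradient of log-loss w.r.t. the pre-activation
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def classify(w, b, x) -> int:
    """Return 1 (block as malicious) or 0 (allow as benign)."""
    return int(sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5)

w, b = train(TRAIN)
```

The point of the sketch is the workflow, not the model: the classifier is applied *before* a file executes, so a malicious verdict blocks it pre-damage rather than flagging it after the fact. A production system would use a deep network over far richer features, but the decision step is the same shape.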
A new AI-based chatbot tool that gives identity crime victims after-hours help was also designed with future B2B applications in mind, including helping employees report a cyberattack when the IT or security team is unavailable. This chatbot helper is a new service currently undergoing beta testing by the Identity Theft Resource Center (ITRC), leveraging technology developed by its partner SAS Institute. Thanks to ViViAN, individuals do not have to wait until normal ITRC business hours in order to report an incident; rather, they can lodge their complaints with the chatbot and receive reassurance and guidance on the immediate next steps they should take. All communications with ViViAN are then followed up by a live agent when one becomes available. But at least this way, victims are able to act swiftly when their data is at stake and time is of the essence.
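The workflow described above – an after-hours report gets immediate canned guidance and is queued for a live agent to follow up later – can be sketched in a few lines. ViViAN's actual implementation is not public, so the class names, incident categories, and guidance text below are all hypothetical, shown only to illustrate the intake-and-follow-up pattern.

```python
from dataclasses import dataclass

# Hypothetical immediate-guidance text per incident type; a real system
# would draw on a much richer, vetted knowledge base.
GUIDANCE = {
    "identity_theft": "Place a fraud alert with a credit bureau and change your passwords.",
    "cyberattack": "Isolate affected machines and preserve logs for the security team.",
}

@dataclass
class Report:
    incident_type: str
    details: str
    followed_up: bool = False  # set True once a live agent reviews it

class AfterHoursIntake:
    """Accept reports at any hour; queue each one for human follow-up."""

    def __init__(self):
        self.queue = []

    def lodge(self, incident_type: str, details: str) -> str:
        """Record the report and return immediate next-step guidance."""
        self.queue.append(Report(incident_type, details))
        return GUIDANCE.get(incident_type, "A specialist will contact you shortly.")

    def follow_up_next(self):
        """Called by a live agent: claim the oldest unhandled report."""
        for report in self.queue:
            if not report.followed_up:
                report.followed_up = True
                return report
        return None
```

The design choice worth noting is that the chatbot never closes a case on its own: every report stays in the queue until `follow_up_next` hands it to a human, mirroring the article's point that the bot buys victims time rather than replacing the live agent.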