Even if your grad has finally made it through college, that doesn't mean they're ready to step out into the real world with no help. They'll not only have to find a job but also might need a little help living on their own, taking on more responsibility and being more of an adult in general. That includes having better security practices, dressing smarter and, if they're lucky enough to find their own apartment, making their new place feel more like home. Here are a few gadgets that could help ease the transition into "adulthood." It's not a terribly sexy subject, but keeping your online data safe should be a priority for everyone, including your new grad.
Acronis has announced its official artificial intelligence partnership with Associazione Sportiva Roma, an Italian professional football club. Under the agreement, Acronis will provide AI and machine learning (ML) solutions to process football data to optimise game and business operations, as well as cyber protection for mission-critical workloads. AS Roma's chief operating officer, Francesco Calvo, shared his thoughts on the importance of the partnership: "The sport of football is constantly evolving, and we have entered an age where we are more dependent than ever before on data to make game-winning calls. However, the more important data is, the more at risk it becomes. This is why we are partnering with Acronis not only to help us analyse and improve the quality of the data we collect, but also to protect it."
U.S. national security officials have approved an investor group's purchase of gay-dating app Grindr, which is being sold by a Chinese company after the Trump administration raised concerns about the potential theft of Americans' personal data. In investor documents released Friday, China's Beijing Kunlun Tech Co. said the buyer has secured approval from the Committee on Foreign Investment in the United States, a panel of national security experts that ordered Beijing Kunlun Tech to sell its stake last year.
The Australian arm of IBM has made its financial results for 2019 available, reporting net profit of AU$109 million, slightly down from the AU$119 million made a year prior. Total revenue for the year, however, decreased by over AU$300 million to just shy of AU$2.6 billion. The cost of sales fell by around the same amount, from AU$2.25 billion to AU$1.93 billion, while the company's selling, general, and administrative expenses remained steady at AU$453 million. All of IBM Australia's segments reported weaker revenue year on year: global technology services contributed AU$1.04 billion, down from AU$1.16 billion; cloud and cognitive software brought in AU$622 million, compared with AU$726.7 million; global business services chipped in AU$33 million less, at AU$503 million; intercompany services and sales provided AU$299 million, AU$15 million less year on year; and systems sales earned AU$111 million, a AU$61 million drop from the year prior. Tax-wise, IBM Australia's income tax payments fell from AU$52 million to AU$41.4 million for the year ended December 31, 2019.
The infrastructure of modern society is controlled by software systems. These systems are vulnerable to attacks; several such attacks, launched by "recreational hackers," have already led to severe disruption. However, a concerted and planned attack whose goal is to reap harm could lead to catastrophic results (for example, by disabling the computers that control the electrical power grid for a sustained period of time). The survivability of such information systems in the face of attacks is therefore an area of extreme importance to society. This article is set in the context of self-adaptive survivable systems: software that judges the trustworthiness of the computational resources in its environment and that chooses how to achieve its goals in light of this trust model.
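The core loop of such a system can be illustrated with a minimal sketch: maintain a trust score per resource and select the most trustworthy resource capable of performing a task. All names (`choose_resource`, the resource records, the trust values) are hypothetical illustrations, not the article's actual implementation.

```python
# Hypothetical sketch of trust-aware resource selection: the system keeps
# a trust model over its computational resources and picks the most
# trustworthy one that can still perform the requested task.

def choose_resource(resources, trust_model, task):
    """Return the highest-trust resource whose capabilities include `task`."""
    candidates = [r for r in resources if task in r["capabilities"]]
    if not candidates:
        raise LookupError(f"no resource can perform task {task!r}")
    return max(candidates, key=lambda r: trust_model[r["name"]])

resources = [
    {"name": "node-a", "capabilities": {"compute", "store"}},
    {"name": "node-b", "capabilities": {"compute"}},
]
# Trust scores would be updated as the system observes resource behavior;
# here node-a is assumed to look compromised.
trust_model = {"node-a": 0.4, "node-b": 0.9}

print(choose_resource(resources, trust_model, "compute")["name"])  # node-b
```

In a real self-adaptive system the trust model would be revised continuously from observed behavior, and plans would be re-evaluated when a resource's score drops.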
AI is not a solution; it is an enhancement. Many IT decision-makers mistakenly consider AI a silver bullet that can solve all their current IT security challenges without fully understanding how to use the technology and what its limitations are. We have seen AI reduce the complexity of the security analyst's job by enabling automation, triggering the delivery of cyber incident context, and prioritizing fixes. Yet security vendors continue to tout ever more exaggerated AI-enabled capabilities of their solutions without being able to point to AI's specific outcomes.
Seoul – In a cramped office in eastern Seoul, Hwang Seungwon points a remote control toward a huge NASA-like overhead screen stretching across one of the walls. With each flick of the control, a colorful array of pie charts, graphs and maps reveals the search habits of thousands of South Korean senior citizens being monitored by voice-enabled "smart" speakers, an experimental remote care service the company says is increasingly needed during the coronavirus crisis. "We closely monitor for signs of danger, whether they are more frequently using search words that indicate rising states of loneliness or insecurity," said Hwang, director of a social enterprise established by SK Telecom to handle the service. Trigger words lead to a recommendation for a visit by local public health officials. As South Korea's government pushes to allow businesses to access vast amounts of personal information and to ease restrictions holding back telemedicine, tech firms could potentially find much bigger markets for their artificial intelligence and other emerging technologies.
The 122-page publication, called "Explaining decisions made with AI" and written in conjunction with The Alan Turing Institute, the U.K.'s national center for AI, hopes to ensure organizations can be transparent about how AI-generated decisions are made, as well as ensure clear accountability about who can be held responsible for them so that affected individuals can ask for an explanation. Data protection law does not directly reference AI or any associated technologies such as machine learning. However, the General Data Protection Regulation (and the U.K.'s 2018 Data Protection Act) does place a significant focus on large-scale automated processing of personal data, and several provisions specifically refer to the use of profiling and automated decision-making. This means data protection law applies to the use of AI to provide a prediction or recommendation about someone. The ICO suggests compliance teams (including the DPO) and senior management should expect assurances from the product manager that the system the organization is using provides the appropriate level of explanation to decision recipients.
In a blog post today, Google laid out the concept of federated analytics, a practice of applying data science methods to the analysis of raw data that's stored locally on edge devices. As the tech giant explains, it works by running local computations over a device's data and making only the aggregated results -- not the data from the particular device -- available to authorized engineers. While federated analytics is closely related to federated learning, an AI technique that trains an algorithm across multiple devices holding local samples, it only supports basic data science needs. It's "federated learning lite" -- federated analytics enables companies to analyze user behaviors in a privacy-preserving and secure way, which could lead to better products. Google for its part uses federated techniques to power Gboard's word suggestions and Android Messages' Smart Reply feature.
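The mechanism described above can be sketched in a few lines: each device computes a summary over its own raw records, and only those summaries, never the records themselves, are combined on the server. The data layout, metric name, and function names below are illustrative assumptions, not Google's actual implementation (which adds protections such as secure aggregation).

```python
# Minimal sketch of federated analytics: raw data stays on each device;
# only per-device aggregates are shared and combined.

def local_summary(device_records):
    """On-device step: reduce raw records to a count and a sum."""
    values = [r["minutes_used"] for r in device_records]
    return {"n": len(values), "total": sum(values)}

def aggregate(summaries):
    """Server step: combine per-device summaries into a global mean."""
    n = sum(s["n"] for s in summaries)
    total = sum(s["total"] for s in summaries)
    return total / n if n else 0.0

# Simulated devices; these records never leave their device.
device_a = [{"minutes_used": 30}, {"minutes_used": 50}]
device_b = [{"minutes_used": 20}]

summaries = [local_summary(device_a), local_summary(device_b)]
print(aggregate(summaries))  # global mean computed without pooling raw data
```

The key privacy property is visible in the code: `aggregate` only ever sees `{"n": ..., "total": ...}` pairs, so individual records are never centralized.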
Why has there been such a sudden explosion of machine learning and artificial intelligence in security? The truth is that these technologies have been underpinning many security tools for years. Frankly, they are necessary precisely because there has been such a rapid increase in the number and complexity of attacks, and these attacks carry a high cost for business. Recent studies predict that global annual cybercrime costs will grow from $3 trillion in 2015 to $6 trillion annually by 2021.