Amazon's 'New World' MMORPG is finally here

Engadget

After four delays spanning nearly a year and a half, New World is finally here. You can download the MMORPG from Steam and Amazon's own marketplace. At launch, the title is available in a $40 Standard Edition or a $50 Deluxe Edition. The latter comes with the game, a digital artbook and a collection of bonus items. Beyond the cost of entry, you don't need to pay a subscription fee to play New World. If you buy the game and you're an Amazon Prime subscriber, you can claim the Pirate Pack for free until November 1st.


The best tech certifications for every IT professional

ZDNet

IT certifications can help professionals break into the field, stand out in a competitive job market, and pursue higher-level, lucrative positions. Whether you're new to tech and looking for a basic certification or a seasoned pro who wants recognition for specialized skills and experience, there's a certification for you. While the best tech certifications are those that serve your experience and career goals, these credentials help you tout your skill set so you can vie for raises and qualify for better-paying positions. On this page, you'll find an exploration of the best tech certifications available in 2021.


Introduction To Web Applications: Part 1

#artificialintelligence

It is hardly surprising that web applications have seen such impressive development over roughly the past decade. If one were to synthesize the overall experience of desktop applications, a couple of valid arguments would almost certainly emerge. First and foremost, a piece of desktop software has to be manually retrieved (downloaded from the Internet or obtained physically) and installed, which can present issues for the "non-technical" user. Needless to say, this process can bring, and often has brought, subsequent issues with updating and/or patching the software, system requirements, and so on. Cross-platform development effort is also needed to provide versions for the three major operating systems (macOS, Windows, Linux) if the goal is to reach as large an audience as possible. Desktop applications also used to be bound to a single machine in terms of licensing, which further reduced one's flexibility in approaching one's work. A further valid point has to do with limited, often delayed, user feedback and how that can diminish the range of testing scenarios. Of course, no solution is built of disadvantages alone, and we are not dealing with such a case here either: desktop applications tend to be faster and are generally considered more secure than their web counterparts. However, history has shown that while the web is not a perfect solution, its advantages were simply too powerful to ignore. Not only does a web application require no installation by the user, but updates can also be rolled out and made available to all users instantly after a new release.


Pittsburgh Supercomputer Powers Machine Learning Analysis of Rare East Asian Stamps

#artificialintelligence

Setting aside the relatively recent rise of electronic signatures, personalized stamps have been a popular form of identification for formal documents in East Asia. These identifiers, easily forged but culturally ubiquitous, are the subject of research by Raja Adal, an associate professor of history at the University of Pittsburgh. But, it turns out, the human expertise required to study these stamps at scale was prohibitive, so Adal turned to supercomputer-powered AI to lend a hand. "[From] the perspective of the social sciences, what matters is not that these instruments are impossible to forge (they're not) but that they are part of a process by which documents are produced, certified, circulated and approved," Adal explained in an interview with Ken Chiacchia of the Pittsburgh Supercomputing Center (PSC). "In order to understand the details of this process, it's very helpful to have a large database. But until now, it was pretty much impossible to easily index tens of thousands of stamps in an archive of documents, especially when these documents are all in a language like Japanese, which uses thousands of different Chinese characters."
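
To make the indexing idea concrete, here is a minimal, hypothetical sketch of grouping stamp images by visual similarity. The toy histogram embedding, the StampIndex class, and the 0.95 threshold are all illustrative stand-ins, not the PSC team's actual pipeline, which would rely on far richer learned features.

```python
# Hypothetical sketch: index stamp images by visual similarity so that
# recurring stamps can be grouped across a large document archive.
# Assumes stamp regions have already been cropped from scanned pages.
import numpy as np

def embed(stamp: np.ndarray) -> np.ndarray:
    """Toy embedding: a normalized grayscale histogram.
    A real system would use features from a trained CNN instead."""
    hist, _ = np.histogram(stamp, bins=64, range=(0, 255))
    vec = hist.astype(np.float64)
    return vec / (np.linalg.norm(vec) + 1e-12)

class StampIndex:
    """Groups stamps whose embeddings are close in cosine similarity."""
    def __init__(self, threshold: float = 0.95):
        self.threshold = threshold
        self.centroids: list[np.ndarray] = []

    def add(self, stamp: np.ndarray) -> int:
        """Returns the group id for this stamp, creating a group if needed."""
        v = embed(stamp)
        for i, c in enumerate(self.centroids):
            if float(v @ c) >= self.threshold:  # likely the same stamp
                return i
        self.centroids.append(v)
        return len(self.centroids) - 1

# Toy usage with a random "stamp" crop standing in for a scanned image.
rng = np.random.default_rng(0)
idx = StampIndex()
group = idx.add(rng.integers(0, 256, size=(64, 64)))
```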


Oracle bakes more automation, analytics into Fusion Cloud ERP, EPM suite

#artificialintelligence

In response to what it says is customer demand for "relentless" automation, Oracle plans to release in November a series of updates to its Fusion Cloud ERP and EPM suite that add features designed to streamline the process of logging and tracking transactions, while offering enhanced, AI-based analytics meant to optimize business processes. "Organizations at large are really looking to us to help them to improve the speed, the accuracy of the business processes, and really weeding out those mundane, really non-value add tasks as much as possible," said Juergen Lindner, Oracle's senior vice president of SaaS marketing.


Using AI and old reports to understand new medical images

#artificialintelligence

Getting a quick and accurate reading of an X-ray or some other medical image can be vital to a patient's health and might even save a life. Obtaining such an assessment depends on the availability of a skilled radiologist and, consequently, a rapid response is not always possible. For that reason, says Ruizhi "Ray" Liao, a postdoc and recent PhD graduate at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), "we want to train machines that are capable of reproducing what radiologists do every day." Liao is first author of a new paper, written with other researchers at MIT and Boston-area hospitals, that is being presented this fall at MICCAI 2021, an international conference on medical image computing. Although the idea of utilizing computers to interpret images is not new, the MIT-led group is drawing on an underused resource (the vast body of radiology reports that accompany medical images, written by radiologists in routine clinical practice) to improve the interpretive abilities of machine learning algorithms.
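
As a rough illustration of how paired reports can supervise image models, here is a generic contrastive-alignment sketch in PyTorch. The function name, temperature value, and random embeddings are assumptions for illustration; this is the general image-text alignment idea, not the authors' exact MICCAI 2021 objective.

```python
# Hypothetical sketch: align image embeddings with embeddings of their
# paired radiology reports, so free-text reports act as a training signal.
import torch
import torch.nn.functional as F

def paired_contrastive_loss(img_emb: torch.Tensor,
                            txt_emb: torch.Tensor,
                            temperature: float = 0.07) -> torch.Tensor:
    """img_emb, txt_emb: (batch, dim) embeddings of matched image/report pairs."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature     # pairwise similarities
    targets = torch.arange(len(logits))      # i-th image matches i-th report
    # Symmetric cross-entropy: pull matched pairs together, push others apart.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random embeddings standing in for encoder outputs.
loss = paired_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```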


A Primer To Explainable and Interpretable Deep Learning

#artificialintelligence

One of the biggest challenges in the data science industry is the black-box debate and the resulting lack of trust in algorithms. In the talk titled "Explainable and Interpretable Deep Learning" at DevCon 2021, Dipyaman Sanyal, Head of Academics & Learning at Hero Vired, discusses developing solutions for the black-box problem. Sanyal holds an MS and a PhD in Economics, and he is also a co-founder of Drop Math. In his 15-year career, he has received several honours, including a place on India's 40 under 40 in Data Science in 2019.
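
For readers unfamiliar with the black-box problem, one simple, model-agnostic interpretability technique is permutation importance: shuffle a feature and measure how much accuracy drops. The sketch below is illustrative only and is not drawn from Sanyal's talk; all names and data are hypothetical.

```python
# Illustrative sketch: permutation importance as a simple way to peek
# inside a "black box" model. Shuffling a feature that the model relies
# on should hurt accuracy; shuffling an irrelevant feature should not.
import numpy as np

def permutation_importance(model_predict, X: np.ndarray, y: np.ndarray,
                           n_repeats: int = 10, seed: int = 0) -> np.ndarray:
    """Returns the mean accuracy drop when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    base_acc = np.mean(model_predict(X) == y)
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])   # break feature j's relationship to y
            drops[j] += base_acc - np.mean(model_predict(Xp) == y)
    return drops / n_repeats

# Toy usage: a "model" that only looks at feature 0.
X = np.random.default_rng(1).normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
importances = permutation_importance(lambda A: (A[:, 0] > 0).astype(int), X, y)
# importances[0] should be large; importances[1] and importances[2] near zero.
```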


Deep Learning’s Diminishing Returns

#artificialintelligence

Deep learning is now being used to translate between languages, predict how proteins fold, analyze medical scans, and play games as complex as Go, to name just a few applications of a technique that is now becoming pervasive. Success in those and other realms has brought this machine-learning technique from obscurity in the early 2000s to dominance today. Although deep learning's rise to fame is relatively recent, its origins are not. In 1958, back when mainframe computers filled rooms and ran on vacuum tubes, knowledge of the interconnections between neurons in the brain inspired Frank Rosenblatt at Cornell to design the first artificial neural network, which he presciently described as a "pattern-recognizing device."
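
Rosenblatt's perceptron is simple enough to sketch in a few lines. The following toy implementation of the classic perceptron update rule is illustrative only, with hypothetical data.

```python
# A minimal sketch of Rosenblatt's perceptron, the 1958 "pattern-recognizing
# device" mentioned above: a single artificial neuron trained with the
# classic mistake-driven update rule.
import numpy as np

def train_perceptron(X: np.ndarray, y: np.ndarray,
                     epochs: int = 100, lr: float = 0.1) -> np.ndarray:
    """X: (n, d) inputs; y: (n,) labels in {-1, +1}.
    Returns weights of shape (d + 1,), including a bias term."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:              # misclassified: nudge boundary
                w += lr * yi * xi
    return w

# Toy usage: learn a linearly separable rule (fires only when both inputs are 1).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w = train_perceptron(X, y)
```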


Stanford Researchers Solve One Thing That Bothered Drones

#artificialintelligence

"Researchers at Stanford introduce a new algorithm that can help drones decide when and when not to offload their AI tasks." Autonomous vehicles, be it self driving cars or drones, in an ideal situation, should need to upload only 1 per cent of their visual data to help retrain their model each day. According to Intel, running a self-driving car for 90 minutes can generate more than four terabytes of data. And, going by the ideal case of 1 per cent, this would generate 40 GBs of data from a single vehicle. Now, that's a lot of data transfer for an ML model to crunch and dish out insights for the vehicle to make the right decisions.


Enterprise AI platform Leena AI raises $30M to be a 'Siri for employees'

#artificialintelligence

Leena AI, an AI-powered conversational platform used by major enterprises such as Nestlé, Coca-Cola, and P&G, has raised $30 million in a series B round of funding led by Bessemer Venture Partners. Founded out of New York in 2018, Leena AI is one of numerous conversational AI platforms that enable companies of all sizes to automate conversations through chatbot-like technology. However, Leena AI is carving a niche for itself by focusing specifically on human resources (HR) teams: it's basically an automated employee helpdesk. Leena AI CEO and cofounder Adit Jain said that his company is setting out to be a "Siri for employees," emulating shifts elsewhere in the technological spectrum; it's about replacing the old way of doing things with something more in line with what people are accustomed to in their everyday lives.