Rather than relying on exit interviews and occasional employee surveys to gauge engagement, organizations can turn to big data and advanced analytics to identify the workers at greatest risk of quitting. A new Harvard Business Review article outlines how applying machine learning algorithms to turnover data and employee information can provide a much more accurate picture of workplace satisfaction. This measure of "turnover propensity" comprises two main indicators: turnover shocks, which are organizational and personal events that cause workers to reconsider their jobs, and job embeddedness, which describes an employee's social ties in the workplace and interest in the work they do. Though achieving this kind of "proactive anticipation" will require a sizable investment of time and effort to develop the necessary data and algorithms, the payoff will likely be worth it: "Leaders can proactively engage valued employees at risk of leaving through interviews, to better understand how the firm can increase the odds that they stay," per HBR.
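The two indicators can be combined into a single risk score. Below is a minimal sketch: a logistic scoring function where turnover shocks raise the score and job embeddedness lowers it. The weights and threshold are invented for illustration; a real model would be fit to an organization's historical turnover data.

```python
import math

def turnover_propensity(shock_count: int, embeddedness: float,
                        w_shock: float = 0.9, w_embed: float = -1.4,
                        bias: float = -0.5) -> float:
    """Illustrative risk score in (0, 1): more shocks raise risk,
    stronger embeddedness (0..1) lowers it. Weights are placeholders,
    not estimates from any published model."""
    z = bias + w_shock * shock_count + w_embed * embeddedness
    return 1.0 / (1.0 + math.exp(-z))

# An embedded employee with no recent shocks scores lower than one
# with several shocks and weak ties to the workplace.
low_risk = turnover_propensity(shock_count=0, embeddedness=0.9)
high_risk = turnover_propensity(shock_count=3, embeddedness=0.2)
```

Employees whose score crosses a chosen threshold would be the ones leaders proactively engage through stay interviews.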
I became addicted to learning a new language with the Lingvist language software within a day of using it. Census data shows that 231 million Americans speak only English at home and do not know another language well enough to communicate in it. But how can you learn a new language without going back to school? Machine learning could be a solution, cutting down on the roughly 200 hours it takes to learn a language using traditional methods. Language company Lingvist intends to shorten that time by using machine learning software that adapts to your learning style.
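Adaptive vocabulary tools of this kind typically rest on spaced repetition: items you answer correctly come back at ever-longer intervals, while mistakes reset the schedule. Lingvist's actual model is proprietary; the sketch below only illustrates the general idea with a simplified SM-2-style update rule.

```python
def next_interval(prev_interval_days: float, ease: float, correct: bool):
    """One scheduling step: a correct answer stretches the review
    interval by the card's ease factor; a mistake resets the interval
    to one day and makes the card slightly 'harder'. Constants follow
    the classic SM-2 conventions, simplified for illustration."""
    if not correct:
        return 1.0, max(1.3, ease - 0.2)
    return prev_interval_days * ease, ease + 0.1

# Simulate four reviews of one vocabulary card.
interval, ease = 1.0, 2.5
for answer in [True, True, False, True]:
    interval, ease = next_interval(interval, ease, answer)
```

The adaptive part is that each learner's answer history drives their own intervals, so time is spent on the words they personally find hard.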
This headline may seem a bit odd to you. Because data science has a huge impact on today's businesses, the demand for data science experts keeps growing. As I write this, there are 144,527 data science jobs on LinkedIn alone. Still, it's important to keep your finger on the pulse of the industry and stay aware of the fastest and most efficient data science solutions. To help you out, our data-obsessed CV Compiler team analyzed a sample of vacancies and identified the data science employment trends of 2019.
The academic medical center of the University of Michigan is leveraging investments in artificial intelligence, machine learning and advanced analytics to unlock the value of its health data. According to Andrew Rosenberg, MD, chief information officer for Michigan Medicine, the organization currently has 34 ongoing AI and machine learning projects, 28 of which have principal investigators. "There's a lot of collaboration around these projects--as there should be for the diversity of thought and background needed to deal with complex problems--working with at least seven other U of M schools," Rosenberg told the Machine Learning for Health Care conference on Friday in Ann Arbor, Mich. "That's one of the powers that we enjoy." One of the machine learning projects cited by Rosenberg leverages a combination of electronic health records, monitor data and analytics to predict acute hemodynamic instability--when blood flow drops and deprives the body of oxygen--which is one of the most common causes of death for critically ill or injured patients.
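The Michigan Medicine model draws on rich EHR and monitor streams; as a back-of-the-envelope illustration of the idea, even a single derived vital sign can flag hemodynamic trouble. The sketch below uses the shock index (heart rate divided by systolic blood pressure), a commonly cited bedside warning sign; the 0.9 threshold and record layout are assumptions for the example, not the team's method.

```python
def shock_index(heart_rate: float, systolic_bp: float) -> float:
    """Heart rate / systolic BP; values above roughly 0.9 are a
    widely used warning sign of hemodynamic compromise."""
    return heart_rate / systolic_bp

def unstable(vitals: dict, threshold: float = 0.9) -> bool:
    """Flag a patient record when the shock index exceeds the threshold."""
    return shock_index(vitals["hr"], vitals["sbp"]) > threshold

stable_patient = {"hr": 72, "sbp": 120}   # shock index 0.60
deteriorating = {"hr": 118, "sbp": 95}    # shock index ~1.24
```

A production system would replace the single-threshold rule with a model trained on many signals over time, but the input/alert shape is the same.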
The hiring process is important but time-consuming. It can be exhausting to sort through applications to determine which candidates have the necessary qualifications for your job opening. One time-saving technology recruiters and hiring managers are using is artificial intelligence (AI), which can be especially useful when sorting through a large number of applications. However, there are also some potential downsides to consider when using AI. We asked 10 Forbes Coaches Council members what issues AI raises with increased use in the hiring process, and how you can best work around those challenges.
Most financial institutions know it's critical to manage the ever-increasing amounts of accessible data, but many miss the potential in using that data in innovative ways. Financial institutions have a plethora of data they can access, either through their own systems or through public sources. However, many can't -- or won't -- exploit the large volumes of data, particularly the "owned" data that an organization holds about customers. This kind of data is typically called customer relationship management data, such as the purchase history tied to app installs, email addresses and postal addresses. Though financial institutions maintain and collect massive volumes of data, many firms are restricted from fully using that data because they are required to comply with stringent regulations around what can and cannot be done with customer data.
Thanks to major improvements in computing power, increasingly sophisticated algorithms, and an unprecedented amount of data, artificial intelligence (AI) has started generating significant economic value. With algorithms that make predictions from large amounts of data, AI contributes, by some estimates, about $2 trillion to today's global economy. It could add as much as $16 trillion by 2030, making it more than 10 percent of gross world product. AI's outsize contribution to global economic growth has important implications for geopolitics. Around the world, governments are ramping up their investments in AI research and development (R&D), infrastructure, talent, and product development.
For many companies, the typical approach to implementing AI is to use certain features from existing software platforms (say, Salesforce.com's Einstein). But then there are those companies that build their own models. Yes, this can move the needle and lead to major benefits. At the same time, there are clear risks and expenses. Let's face it: you need to form a team, prepare the data, develop and test models, and then deploy the system.
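The build-your-own path named above (prepare data, develop and test a model, deploy it) can be sketched end to end in miniature. The data, model, and serialization format below are toy placeholders chosen so the whole loop fits in a few lines.

```python
import json
import statistics

# Toy dataset of (x, y) pairs standing in for prepared training data.
raw = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.1), (6, 12.0)]

# 1. Prepare: split into training and held-out test sets.
train, test = raw[:4], raw[4:]

# 2. Develop: fit y ~ slope * x by least squares through the origin.
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

# 3. Test: mean absolute error on the held-out points.
mae = statistics.mean(abs(y - slope * x) for x, y in test)

# 4. Deploy: persist the fitted parameters for a serving system to load.
artifact = json.dumps({"model": "linear_no_intercept", "slope": slope})
```

Every step here is a placeholder for real work: data preparation alone is usually the dominant cost, which is exactly the expense the paragraph above warns about.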
Amazon Web Services (AWS) launched general availability of its fully managed Lake Formation service, designed to help organizations better manage their data lakes. The service helps with building, securing, and managing those data repositories. Lake Formation, which was initially announced at the AWS re:Invent show late last year, is built on AWS' Glue extract, transform, and load (ETL) service. It automates the provisioning and configuring of storage; crawls the data to extract schema and metadata tags; automatically optimizes the partitioning of the data; and transforms the data into formats like Apache Parquet and ORC for easier analytics. Data can be ingested from different sources using pre-defined templates.
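Two of the automated steps described above, crawling data to infer a schema and partitioning it by a column, can be illustrated in miniature. This stdlib-only sketch invents a record layout and a "region" partition key purely for the example; the real service works against S3 objects and the Glue Data Catalog rather than in-memory dicts.

```python
from collections import defaultdict

# Toy records standing in for files landed in a data lake.
records = [
    {"id": 1, "region": "us-east", "amount": 9.5},
    {"id": 2, "region": "eu-west", "amount": 3.2},
    {"id": 3, "region": "us-east", "amount": 7.1},
]

# "Crawl": derive column -> type from the data itself, the way a
# crawler extracts schema and metadata without a hand-written DDL.
schema = {key: type(value).__name__ for key, value in records[0].items()}

# Partition by a chosen column, analogous to laying data out under
# region=us-east/ style prefixes so queries can skip irrelevant files.
partitions = defaultdict(list)
for rec in records:
    partitions[rec["region"]].append(rec)
```

The payoff of partitioning is pruning: a query filtered on `region` only has to read the matching partition instead of scanning the whole table.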