Facebook's commitment to the wider dev community remains as strong as ever, if recent developments are any indication. Following the open-sourcing of image processing library Spectrum in January, natural language processing modeling framework PyText late last year, and AI reinforcement learning platform Horizon in November, Facebook's AI research division today announced that Pythia, a modular plug-and-play framework that enables data scientists to quickly build, reproduce, and benchmark AI models, is now freely available on GitHub. As Facebook explains in a blog post, Pythia -- which is built atop the company's PyTorch machine learning framework -- is principally intended for vision and language tasks, such as answering questions related to visual data and automatically generating image captions. It incorporates elements of Facebook AI Research's top entries in AI competitions, such as LoRRA, a vision and language model that won both the VQA Challenge 2018 and the VizWiz Challenge 2018, and it's capable of showing how previous state-of-the-art AI systems achieved top benchmark results and comparing their performance to that of new models. Pythia also supports distributed training and a variety of data sets, as well as custom losses, metrics, scheduling, and optimizers.
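Plug-and-play frameworks of this kind typically let users swap in custom components (losses, metrics, optimizers) by registering them under a name. The sketch below illustrates that general registry pattern in plain Python; the names and API here are illustrative assumptions, not Pythia's actual interface.

```python
# A minimal sketch of the registry pattern used by plug-and-play ML
# frameworks: components are registered by name, then built on demand.
# All names here are hypothetical, not Pythia's real API.

class Registry:
    def __init__(self):
        self._components = {}

    def register(self, name):
        """Decorator that records a class under a string key."""
        def decorator(cls):
            self._components[name] = cls
            return cls
        return decorator

    def build(self, name, **kwargs):
        """Instantiate a registered component by name."""
        return self._components[name](**kwargs)

losses = Registry()

@losses.register("squared_error")
class SquaredError:
    def __call__(self, pred, target):
        return (pred - target) ** 2

# A config file can now reference "squared_error" by name.
loss_fn = losses.build("squared_error")
print(loss_fn(3.0, 1.0))  # prints 4.0
```

This is the design choice that makes such frameworks "modular": experiments are described in configuration by component names, and the framework resolves those names through the registry at runtime.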
Rice University statistician Genevera Allen knew she was raising an important issue when she spoke earlier this month at the American Association for the Advancement of Science (AAAS) annual meeting in Washington, but she was surprised by the magnitude of the response. Allen, associate professor of statistics and founding director of Rice's Center for Transforming Data to Knowledge (D2K Lab), used the forum to raise awareness about the potential lack of reproducibility of data-driven discoveries produced by machine learning (ML). She cautioned her audience not to assume that today's scientific discoveries made via ML are accurate or reproducible. She said that many commonly used ML techniques are designed to always make a prediction but not to report the uncertainty of the finding. Her comments garnered worldwide media attention, with some commentators questioning the value of ML in data science.
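The point that many ML techniques "always make a prediction" is easy to demonstrate with clustering: a k-means algorithm will partition even pure noise into k groups, with no built-in signal that the discovered structure may not be real. Below is a minimal NumPy sketch (the k-means implementation here is a toy illustration, not any specific library's method).

```python
import numpy as np

# Pure noise: 300 uniformly random 2-D points with no real cluster structure.
rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 2))

def kmeans(X, k, iters=50, seed=0):
    """Toy Lloyd's-algorithm k-means; returns a label for every point."""
    init = np.random.default_rng(seed)
    centers = X[init.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels, centers

labels, _ = kmeans(X, k=3)
# k-means dutifully reports 3 "clusters" in uniform noise, and nothing in
# its output indicates how confident we should be that they are real.
```

This is the reproducibility hazard Allen describes: a second noise sample would yield different "clusters" just as confidently, so a discovery pipeline built on such output needs a separate uncertainty or stability check.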
Business loves buzzwords, and there's been no bigger buzzword recently than artificial intelligence. AI, of course, lets companies optimize their operations, business models and customer experiences around data-driven insights, while developing products and services that align more closely with customer needs. Now that leading cloud service providers offer AI-driven machine learning and deep learning training platforms--customized to business user data and accessed as cloud-hosted application programming interfaces--companies of all sizes can seize the benefits of AI. By offering an alternative to on-premises AI solutions, cloud providers are giving small businesses the same advantages their larger counterparts are looking to exploit. Among the valuable AI tools at their disposal are natural language processing, image recognition, translation, search functions and data analytics.
How should modern enterprises go about implementing artificial intelligence? Enterprises in every industry want to adopt AI. I've yet to speak with an executive who hasn't considered how AI could impact their team or company. They know it can deliver unparalleled operational efficiency, enable new business models, delight customers, and ultimately drive their bottom line. Despite this, only 30% of enterprises report piloting an AI project.
For all of the hype about artificial intelligence (AI), most software is still geared toward engineers. To demystify AI and unlock its benefits, the MIT Quest for Intelligence created the Quest Bridge to bring new intelligence tools and ideas into classrooms, labs, and homes. This spring, more than a dozen Undergraduate Research Opportunities Program (UROP) students joined the project in its mission to make AI accessible to all. Undergraduates worked on applications designed to teach kids about AI, improve access to AI programs and infrastructure, and harness AI to improve literacy and mental health. Six projects are highlighted here.
Companies face issues with training data quality and labeling when launching AI and machine learning initiatives, according to a Dimensional Research report. Worldwide spending on artificial intelligence (AI) systems is predicted to hit $35.8 billion in 2019, according to IDC. This increased spending is no surprise: With digital transformation initiatives critical for business survival, companies are making large investments in advanced technologies. However, nearly eight out of 10 organizations engaged in AI and machine learning said that projects have stalled, according to the Dimensional Research report. The vast majority (96%) of these organizations said they have run into problems with data quality, the data labeling necessary to train AI, and building model confidence.
One of the two task forces announced to be formed by the US House Committee on Financial Services will be investigating the use of artificial intelligence (AI) technologies for FinTech. The focus of the task force will be to examine digital identification technologies that use AI to reduce fraud. It will also look into issues such as regulating ML in the financial services industry, risks associated with algorithms and big data, and the impact of automation on jobs and the economy in the US. AI has been one of the hottest technologies used by emerging FinTech players. It is used in automation, social media analytics and intelligence tools, cybersecurity, fraud prevention, and other areas.
How they describe themselves: Actionable analytics is the backbone of NYSE-listed Enova International, a global online lending company. In the past 14 years, the analytics team has applied predictive and prescriptive analytics to fraud detection, credit risk management, and customer retention and built the Colossus Digital Decisioning Platform to automate and optimize many of Enova's operational decisions. As a result, Enova has extended over $20 billion in credit to over 5 million customers worldwide. Enova Decisions was launched in 2016 to help businesses in financial services, insurance, healthcare, telecommunications, and higher education achieve similar outcomes by leveraging the same analytics expertise and decisioning technology. How they describe their product/innovation: Enova Decisions Cloud is a complete decision management suite where clients can integrate first- and third-party data, deploy machine learning models, manage business rules, and monitor and continuously optimize performance.
Within the next decade, healthcare will see emerging technologies including artificial intelligence, cloud computing, predictive analytics and blockchain spurring billions of dollars in value increases, according to a new McKinsey & Company report on this tech-driven "era of exponential growth." For these innovations to impact areas like clinical productivity, care delivery and waste reduction, though, certain value pools will need to be disrupted across the entire industry. Here are four possible disruptive changes that could transform healthcare in the coming years, according to McKinsey.
I have worked with 12 startups. They have spanned verticals from fintech and healthcare to ed-tech and biotech, and ranged from pre-seed to post-acquisition. My roles have also varied, from deep-in-the-weeds employee #1 to head of data science and strategic advisor. In all of them I worked on interesting machine learning and data science problems. All tried to build great products.