Scienaptic partners with Michigan CU League
This alliance will enable the company to offer its AI-powered credit underwriting technology to credit unions throughout Michigan. Organized in 1934, MCUL has a proud tradition of innovation and leadership among the nation's credit union trade associations. The partnership with Scienaptic will enable MCUL to deepen its commitment to serving its member credit unions, expanding their market presence and improving their financial standing. Scienaptic's current Michigan clients include Advantage One Credit Union and 4Front Credit Union. In a recent press release, Nicholas Schmitter, Chief Lending Officer at Advantage One, said, "Scienaptic's AI-powered credit decisioning platform will allow us to go the extra mile. It will give better access to credit for many deserving members."
Multi-Complementary and Unlabeled Learning for Arbitrary Losses and Models
A weakly supervised learning framework called complementary-label learning has recently been proposed, in which each sample is equipped with a single complementary label denoting one of the classes to which the sample does not belong. However, existing complementary-label learning methods cannot learn from easily accessible unlabeled samples or from samples with multiple complementary labels, which are more informative. In this paper, we remove these limitations by proposing a novel multi-complementary and unlabeled learning framework that allows unbiased estimation of the classification risk from samples with any number of complementary labels and from unlabeled samples, for arbitrary loss functions and models. We first derive an unbiased estimator of the classification risk from samples with multiple complementary labels, and then further improve the estimator by incorporating unlabeled samples into the risk formulation. The derived estimation error bounds show that the proposed methods achieve the optimal parametric convergence rate. Finally, experiments on both linear and deep models demonstrate the effectiveness of our methods.
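The abstract does not spell out the multi-complementary estimator itself, but the single-complementary special case it generalizes is well known: if each complementary label is drawn uniformly from the K-1 incorrect classes, then averaging the term below over the data gives an unbiased estimate of the ordinary classification risk, for any loss. A minimal sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def softmax_ce(logits, k):
    """Cross-entropy loss of the softmax of `logits` w.r.t. class k,
    computed in a numerically stable way."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[k]

def comp_risk_term(logits, comp_label, num_classes):
    """Per-sample unbiased risk term from ONE complementary label,
    assuming the complementary label is uniform over the K-1 wrong
    classes (the special case this framework extends to multiple
    complementary labels and unlabeled samples)."""
    total = sum(softmax_ce(logits, k) for k in range(num_classes))
    return total - (num_classes - 1) * softmax_ce(logits, comp_label)
```

Unbiasedness follows because averaging the term over the K-1 possible complementary labels of a sample with true label y cancels every per-class loss except the true-class loss, recovering the ordinary supervised risk in expectation.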