Contribution of combinatorial semantics to the development of multilingual digital tools
This paper describes how the field of Combinatorial Semantics has contributed to the design of three prototypes for the automatic generation of argument patterns in nominal phrases in Spanish, French and German (Xera, Combinatoria and CombiContext). It also shows the importance of knowing about the argument syntactic-semantic interface in a production situation in the context of foreign languages. After a descriptive section on the design, typology and information levels of the resources, there follows an explanation of the central role of combinatorial meaning (roles and ontological features). The study deals with the different semantic filters applied in the selection, organization and expansion of the lexicon, these being key pieces for the generation of grammatically correct and semantically acceptable mono- and biargumental nominal phrases.
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.05)
- Europe > Spain > Galicia > A Coruña Province > Santiago de Compostela (0.05)
- Europe > France (0.05)
- (14 more...)
4 AI Predictions for 2023: From the Great Correction to Practical AI
Enthusiasm for self-driving cars has waned and automakers are rethinking or exiting their robo-taxi plans. This is just one sign that we are in the middle of the Great Correction in AI -- a period when wild ambitions and moon-shot ideas are being replaced by more realistic approaches to artificial intelligence and its attendant machine learning (ML) models, algorithms, and neural networks. I'm calling this the new pragmatism of Practical Artificial Intelligence, and I predict this technology will rise in 2023 like a phoenix from the ashes of years of irrational exuberance around artificial intelligence. Under the umbrella of practicality, companies will strategically rethink how they use artificial intelligence, an attitudinal shift that will filter down to implementation, AI and machine learning model management, and governance. Generative AI -- in which algorithms create synthetic data -- has been a big buzzword lately, with slick image-generation capabilities grabbing headlines.
- Automobiles & Trucks (0.55)
- Information Technology (0.52)
When the AI goes haywire, bring on the humans
OAKLAND, Calif., Oct 13 (Reuters) - Used by two-thirds of the world's 100 biggest banks to aid lending decisions, credit scoring giant Fair Isaac Corp (FICO.N) and its artificial intelligence software can wreak havoc if something goes wrong. That crisis nearly came to pass early in the pandemic. As FICO recounted to Reuters, the Bozeman, Montana company's AI tools for helping banks identify credit and debit card fraud concluded that a surge in online shopping meant fraudsters must have been busier than usual. The AI software told banks to deny millions of legitimate purchases, at a time when consumers had been scrambling for toilet paper and other essentials. But consumers ultimately faced few denials, according to FICO.
- North America > United States > Montana > Gallatin County > Bozeman (0.25)
- North America > United States > California > Alameda County > Oakland (0.25)
- North America > United States > California > San Francisco County > San Francisco (0.06)
- (4 more...)
- Banking & Finance (1.00)
- Law Enforcement & Public Safety > Fraud (0.35)
- Information Technology > Services (0.35)
FICO Announces Winners of Inaugural xML Challenge
FICO, the leading provider of analytics and decision management technology, together with Google and academics at UC Berkeley, Oxford, Imperial, UC Irvine and MIT, have announced the winners of the first xML Challenge at the 2018 NeurIPS workshop on Challenges and Opportunities for AI in Financial Services. Participants were challenged to create machine learning models with both high accuracy and explainability using a real-world dataset provided by FICO. Sanjeeb Dash, Oktay Günlük and Dennis Wei, representing IBM Research, were this year's challenge winners. The winning team received the highest score in an empirical evaluation method that considered how useful the explanations are to a data scientist with domain knowledge in the absence of model predictions, as well as how long it takes such a data scientist to go through the explanations. For their achievements, the IBM team earned a $5,000 prize.
- North America > United States > New York (0.06)
- North America > United States > California > Santa Clara County > San Jose (0.06)
- Information Technology (0.59)
- Media > News (0.40)
- Banking & Finance > Financial Services (0.39)
- Law > Intellectual Property & Technology Law (0.33)
AI Explainability 360: Impact and Design
This section highlights the impact of the AIX360 toolkit in the first two years since its release. It describes several different forms of impact on real problem domains and the open source community. This impact has resulted in improvements in multiple metrics: accuracy, semiconductor yield, satisfaction rate, and domain expert time. The current version of the AIX360 toolkit includes ten explainability algorithms, described in Table 1, covering different ways of explaining. Explanation methods can be either local or global: the former refers to explaining an AI model's decision for a single instance, while the latter refers to explaining a model in its entirety.
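The local/global distinction above can be illustrated with a minimal sketch in plain Python (this is a hypothetical toy model for illustration, not part of the AIX360 API): for a linear scoring model, a global explanation is the weight vector describing the model as a whole, while a local explanation attributes one instance's score to per-feature contributions.

```python
# Hypothetical linear scoring model: score = bias + sum(w_f * x_f).
# Feature names and weights are invented for illustration only.
weights = {"utilization": -2.0, "payment_history": 3.5, "account_age": 1.2}
bias = 0.5

def score(instance):
    """Model output for one instance (a dict of feature values)."""
    return bias + sum(weights[f] * v for f, v in instance.items())

# Global explanation: describes the model in its entirety --
# for a linear model, the weights themselves.
global_explanation = weights

def local_explanation(instance):
    """Local explanation: why THIS instance got THIS score,
    as per-feature contributions that sum (with the bias) to the score."""
    return {f: weights[f] * v for f, v in instance.items()}

applicant = {"utilization": 0.8, "payment_history": 1.0, "account_age": 2.0}
contribs = local_explanation(applicant)
print(score(applicant))
print(contribs)
```

The key property of the local explanation here is additivity: the contributions plus the bias reconstruct the score exactly, so a reviewer can see which features pushed the decision up or down for this one applicant.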
It's Time for AI to Explain Itself
First published on Aug. 10, 2021, on Hewlett Packard Enterprise's Enterprise.nxt, publishing insights about the future of technology. AI models get more accurate all the time, but even the data scientists who built them can't explain why -- and that's a problem. AI-driven algorithms are now a daily part of nearly everyone's lives. We've all grown used to machines suggesting a new series to binge on Netflix, another person to follow on Facebook, or the next thing we need to order from Amazon. They're also driving far more important decisions, like what stocks to invest in, which medical procedures to consider, or whether you qualify for a mortgage.
- Banking & Finance (1.00)
- Health & Medicine > Therapeutic Area > Oncology (0.72)
FICO Launches Executive LinkedIn Live Video Series on Operationalizing Analytics and Artificial Intelligence
Global analytics software provider, FICO, today announced its upcoming executive LinkedIn Live video series, "Coffee with Claus" and "Expect the Unexpected." Hosted by FICO Executive Vice President and Chief Technology Officer, Claus Moldt, "Coffee with Claus" will discuss the role of analytics and artificial intelligence in digital transformation, while "Expect the Unexpected" will feature FICO Chief Analytics Officer Scott Zoldi exploring a range of AI topics, such as ethics, governance, diversity, and regulation, with executive leaders. Many of today's enterprises rely on data, and increasingly on AI, to deliver a constant stream of intelligence and insight that can be applied to help them pivot in constantly changing business environments as well as address pressing everyday challenges. "With the COVID-19 pandemic accelerating countless digital transformation journeys, our goal is to ensure enterprises are deploying the data at their disposal in the most beneficial ways, some of which include a need to adopt AI to make robust and informed digital decisions," said Claus Moldt, EVP and CTO at FICO. The first episode of "Coffee with Claus," What is an AI Platform?, airs Tuesday, June 22, 2021 at noon EST and features Forrester Analyst Mike Gualtieri.
- Health & Medicine (0.82)
- Media > News (0.40)
- Law > Intellectual Property & Technology Law (0.34)
How and Why Enterprises Must Tackle Ethical AI - InformationWeek
Bias and ethics in artificial intelligence have captured the attention of the public and some organizations following several high-profile examples at work. For instance, research has demonstrated bias against darker-skinned and female individuals in face recognition technology, and a secret AI recruiting tool at Amazon showed bias against women, among many other examples. But when it comes to looking inside at our own houses -- or businesses -- we may not be very far along in prioritizing AI ethics or taking measures to mitigate bias in algorithms. According to a new report from FICO, a global analytics software firm, 65% of C-level analytics and data executives surveyed said that their company cannot explain how specific AI model decisions or predictions are made, and 73% have struggled to get broader executive support for prioritizing AI ethics and responsible AI practices. Only 20% actively monitor their models in production for fairness and ethics.
Fico and Corinium survey looks at responsible AI in business - Actu IA
FICO, known for the credit score known as the "FICO Score," an indicator used to predict credit risk, has released a report titled "The State of Responsible AI." The document presents the results of a survey conducted with the help of business intelligence firm Corinium on responsible AI. The two organizations sought to understand what enables a company to adopt more responsible, ethical, transparent and secure AI. As part of the initiative, Corinium and FICO surveyed companies that use artificial intelligence on a daily basis, with the objective of better understanding how companies are using AI and whether questions of ethics, responsibility, and respect for the interests of customers have been taken on board by these groups.
How Does Your AI Work? Nearly Two-Thirds Can't Say, Survey Finds - AI Summary
Nearly two-thirds of C-level AI leaders can't explain how specific AI decisions or predictions are made, according to a new survey on AI ethics by FICO, which says there is room for improvement. FICO hired Corinium to query 100 AI leaders for its new study, called "The State of Responsible AI: 2021," which the credit report company released today. More than two thirds of survey-takers say the processes they have to ensure AI models comply with regulations are ineffective, while nine out of 10 leaders who took the survey say inefficient monitoring of models presents a barrier to AI adoption. Seeing as how the regulatory environment is still developing, it's concerning that 43% of respondents in FICO's study found that "they have no responsibilities beyond meeting regulatory compliance to ethically manage AI systems whose decisions may indirectly affect people's livelihoods," such as audience segmentation models, facial recognition models, and recommendation systems, the company said. At a time when AI is making life-altering decisions for their customers and stakeholders, the lack of awareness of the ethical and fairness concerns around AI poses a serious risk to companies, says Scott Zoldi, FICO's chief analytics officer.
- Law (0.63)
- Government (0.63)