Field canal improvement projects (FCIPs) are among the ambitious projects constructed to save fresh water. To finance such projects, conceptual cost models are needed to accurately predict preliminary costs at the early stages of a project. The first step is to develop a conceptual cost model that identifies the key cost drivers affecting the project. Input-variable selection therefore remains an important part of model development, as poor variable selection can decrease model precision. This study identified the most important cost drivers of FCIPs using both a qualitative and a quantitative approach. Subsequently, the study developed parametric cost models based on machine-learning methods such as regression methods, artificial neural networks, a fuzzy model, and case-based reasoning.
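To illustrate the parametric, regression-based flavor of model the abstract describes, the following is a minimal sketch of a one-driver cost model fitted by ordinary least squares. The cost driver (canal length) and the historical costs are invented for illustration and do not come from the study.

```python
# Illustrative sketch only: a tiny parametric cost model fitted by
# ordinary least squares on a single hypothetical FCIP cost driver.

def fit_ols(x, y):
    """Fit y = a + b*x by ordinary least squares (closed form)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical historical projects: canal length (km) vs. final cost
# (thousands of currency units). These numbers are made up.
lengths = [1.0, 2.0, 3.0, 4.0, 5.0]
costs   = [120, 210, 310, 395, 510]

a, b = fit_ols(lengths, costs)
estimate = a + b * 2.5   # conceptual cost estimate for a 2.5 km canal
```

A real study would of course use many drivers and compare this baseline against the neural-network, fuzzy, and case-based models mentioned above.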
A week after Donald Trump's election, a thirty-year-old cognitive scientist named Maya Shankar purchased a plane ticket to Flint, Michigan. Shankar held one of the more unorthodox jobs in the Obama White House, running the Social and Behavioral Sciences Team, also known as the President's "nudge unit." When she launched the team, in early 2014, it felt, Shankar recalls, "like a startup in my parents' basement"--no budget, no mandate, no bona-fide employees. Within two years, the small group of scientists had become a staff of dozens--including an agricultural economist, an industrial psychologist, and "human-centered designers"--working with more than twenty federal agencies on seventy projects, from fixing gaps in veterans' health care to relieving student debt. Usually, the initiatives had, at their core, one question: Could the growing body of knowledge about the quirks of the human brain be used to improve public policy? For months, Shankar had been thinking about how to bring behavioral science to bear on the problems in Flint, where a crisis stemming from lead contamination of the drinking water had stretched on for almost two years. She wondered if lessons from the beleaguered city could inform the Administration's approach to the broader threat posed by lead across America--in pipes, in paint, in dust, and in soil. "Flint is not the only place poisoning kids," Shankar said. In recent years, behavioral science has become a voguish field. In 2002, the Israeli psychologist Daniel Kahneman won a Nobel Prize in Economic Sciences for his work with a colleague, Amos Tversky, exploring the peculiarities of human decision-making in the face of uncertainty. A basic premise of the discipline they'd helped to create was that people's cognition is bias-prone, and susceptible to the cognitive equivalent of optical illusions. 
As a result, small tweaks of presentation or circumstance could make a major difference: if a judge rendered a decision about granting parole just before a meal, the inmate's odds for a favorable outcome dipped to near zero; just after the judge ate, the chances rose to around sixty-five per cent. Grocers had learned that they could sell double the amount of soup if they placed a sign above their cans reading "limit of 12 per person." But, for all the field's potential, its advances seemed mostly to have served the private sector. A prominent exception was the "nudge," a notion advanced by the legal scholar Cass R. Sunstein, now at Harvard Law School, and the University of Chicago behavioral economist Richard Thaler, in their 2008 best-seller "Nudge: Improving Decisions About Health, Wealth, and Happiness."
We address the problem of propositional logic-based abduction, i.e., the problem of searching for a best explanation for a given propositional observation according to a given propositional knowledge base. We give a general algorithm based on the notion of projection; we then study restrictions on the representations of the knowledge base and of the query, and find new polynomial classes of abduction problems.
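To make the abduction task concrete, here is a brute-force sketch (not the paper's projection-based algorithm): given a CNF knowledge base and a set of abducible hypotheses, it finds subset-minimal sets of hypotheses that are consistent with the knowledge base and entail the observation. The clause encoding and the rain/sprinkler example are invented for illustration.

```python
# Brute-force propositional abduction: a clause is a frozenset of
# literals, and a literal is a (variable, polarity) pair.
from itertools import combinations, product

def models(clauses, variables):
    """Yield every truth assignment (dict) satisfying a CNF."""
    for values in product([False, True], repeat=len(variables)):
        m = dict(zip(variables, values))
        if all(any(m[v] == pos for v, pos in clause) for clause in clauses):
            yield m

def explanations(kb, hypotheses, obs, variables):
    """Subset-minimal E with kb+E consistent and kb+E entailing obs."""
    found = []
    for k in range(len(hypotheses) + 1):
        for combo in combinations(hypotheses, k):
            if any(set(e) <= set(combo) for e in found):
                continue  # a subset already explains obs: combo not minimal
            theory = kb + [frozenset([h]) for h in combo]
            ms = list(models(theory, variables))
            # consistent (some model) and entails obs (true in all models)
            if ms and all(m[obs[0]] == obs[1] for m in ms):
                found.append(combo)
    return found

# Toy KB: rain -> wet, sprinkler -> wet; we observe wet.
kb = [frozenset([('rain', False), ('wet', True)]),
      frozenset([('sprinkler', False), ('wet', True)])]
exps = explanations(kb, [('rain', True), ('sprinkler', True)],
                    ('wet', True), ['rain', 'sprinkler', 'wet'])
```

This enumeration is exponential in the number of variables; the point of the restrictions studied in the paper is precisely to identify classes where the search can be done in polynomial time.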
This dissertation presents several new methods of supervised and unsupervised learning of word sense disambiguation models. The supervised methods focus on performing model searches through a space of probabilistic models, and the unsupervised methods rely on the use of Gibbs Sampling and the Expectation Maximization (EM) algorithm. In both the supervised and unsupervised cases, the Naive Bayesian model is found to perform well. An explanation for this success is presented in terms of learning rates and bias-variance decompositions.
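As a reference point for the kind of Naive Bayesian classifier the dissertation evaluates, the following is a minimal sense classifier over bag-of-words contexts with add-one smoothing. The tiny sense-tagged contexts for the ambiguous word "bank" are invented for illustration; real work would use a sense-tagged corpus and smoothing tuned on held-out data.

```python
# Minimal Naive Bayes word-sense classifier sketch (toy data).
from collections import Counter, defaultdict
from math import log

def train_nb(tagged_contexts):
    """Count senses and context words; vocabulary for add-one smoothing."""
    sense_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for sense, words in tagged_contexts:
        sense_counts[sense] += 1
        word_counts[sense].update(words)
        vocab.update(words)
    return sense_counts, word_counts, vocab

def classify(context, sense_counts, word_counts, vocab):
    """Return argmax_s log P(s) + sum_w log P(w | s)."""
    total = sum(sense_counts.values())
    best, best_score = None, float('-inf')
    for sense, n in sense_counts.items():
        denom = sum(word_counts[sense].values()) + len(vocab)
        score = log(n / total)
        for w in context:
            score += log((word_counts[sense][w] + 1) / denom)  # add-one
        if score > best_score:
            best, best_score = sense, score
    return best

# Invented sense-tagged contexts for "bank".
data = [('finance', ['money', 'deposit', 'loan']),
        ('finance', ['loan', 'interest']),
        ('river',   ['water', 'fishing', 'shore']),
        ('river',   ['shore', 'mud'])]
sc, wc, vocab = train_nb(data)
prediction = classify(['loan', 'money'], sc, wc, vocab)
```

The model's conditional-independence assumption is clearly false for natural language, which is what makes its strong empirical performance, analyzed in the dissertation via bias-variance decompositions, interesting.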
The field of image reconstruction has undergone four waves of methods. The first wave was analytical methods, such as filtered back-projection (FBP) for X-ray computed tomography (CT) and the inverse Fourier transform for magnetic resonance imaging (MRI), based on simple mathematical models for the imaging systems. These methods are typically fast, but have suboptimal properties such as poor resolution-noise trade-off for CT. The second wave was iterative reconstruction methods based on more complete models for the imaging system physics and, where appropriate, models for the sensor statistics. These iterative methods improved image quality by reducing noise and artifacts. The FDA-approved methods among these have been based on relatively simple regularization models. The third wave of methods has been designed to accommodate modified data acquisition methods, such as reduced sampling in MRI and CT to reduce scan time or radiation dose. These methods typically involve mathematical image models involving assumptions such as sparsity or low-rank. The fourth wave of methods replaces mathematically designed models of signals and processes with data-driven or adaptive models inspired by the field of machine learning. This paper reviews the progress in image reconstruction methods with focus on the two most recent trends: methods based on sparsity or low-rank models, and data-driven methods based on machine learning techniques.
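To ground the "third wave" idea of sparsity-regularized reconstruction, here is a hedged sketch of ISTA (iterative soft-thresholding) solving min_x 0.5*||Ax - y||^2 + lam*||x||_1 on a tiny hand-made underdetermined system standing in for undersampled measurements. The matrix, signal, and parameters are illustrative, not from the paper.

```python
# ISTA sketch: gradient step on the data-fit term, then the
# soft-thresholding proximal step for the l1 (sparsity) penalty.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def soft(v, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def ista(A, y, lam=0.05, step=0.2, iters=500):
    # step must be below 1/L, L the Lipschitz constant of A^T A
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [ri - yi for ri, yi in zip(matvec(A, x), y)]  # residual Ax - y
        grad = matvec(list(zip(*A)), r)                   # gradient A^T r
        x = soft([xi - step * gi for xi, gi in zip(x, grad)], step * lam)
    return x

# Two measurements of a 4-dim signal with a single nonzero entry.
A = [[1.0, 0.5, 0.2, 0.1],
     [0.3, 1.0, 0.4, 0.2]]
x_true = [0.0, 1.0, 0.0, 0.0]
y = matvec(A, x_true)
x_hat = ista(A, y)   # recovers a sparse, slightly shrunken estimate
```

The l1 penalty biases the recovered coefficient slightly toward zero, which is why the estimate lands near 0.96 rather than 1.0 here; fourth-wave methods replace hand-chosen priors like this one with models learned from data.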