Motion planning problems can be simplified by admissible projections of the configuration space onto sequences of lower-dimensional quotient-spaces, called sequential simplifications. To exploit sequential simplifications, we present the Quotient-space Rapidly-exploring Random Trees (QRRT) algorithm. QRRT takes as input a start configuration, a goal configuration, and a sequence of quotient-spaces. The algorithm grows trees on the quotient-spaces both sequentially and simultaneously to guarantee dense coverage. We show that QRRT (1) is probabilistically complete and (2) can reduce runtime by at least one order of magnitude. However, our experiments show that runtime varies substantially between different quotient-space sequences. To find out why, we perform an additional experiment, showing that the narrower an environment, the more a quotient-space sequence can reduce runtime.
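The core idea above — grow a tree on a low-dimensional quotient-space first, then use its vertices to bias sampling in the full configuration space — can be sketched in a few lines. This is a minimal, obstacle-free illustration, not the authors' implementation: the quotient-spaces are modeled simply as coordinate prefixes of the unit cube, and the 50% lifting bias and step size are arbitrary assumptions.

```python
import math
import random

def nearest(tree, q):
    """Return the tree vertex closest to q."""
    return min(tree, key=lambda v: math.dist(v, q))

def steer(q_near, q_rand, step=0.2):
    """Move from q_near toward q_rand by at most `step`."""
    d = math.dist(q_near, q_rand)
    if d <= step:
        return q_rand
    return tuple(a + step * (b - a) / d for a, b in zip(q_near, q_rand))

def grow_tree(tree, dim, lower_tree=None, iters=200):
    """One QRRT-style growth phase on a dim-dimensional space.

    If a tree on a lower-dimensional quotient-space is available, half of
    the samples are drawn by lifting one of its vertices: the missing
    coordinates are filled in uniformly at random (an illustrative bias).
    """
    for _ in range(iters):
        if lower_tree and random.random() < 0.5:
            base = random.choice(lower_tree)  # vertex of the quotient-space tree
            q_rand = base + tuple(random.uniform(0, 1) for _ in range(dim - len(base)))
        else:
            q_rand = tuple(random.uniform(0, 1) for _ in range(dim))
        q_near = nearest(tree, q_rand)
        tree.append(steer(q_near, q_rand))
    return tree

random.seed(0)
tree2 = grow_tree([(0.0, 0.0)], dim=2)                          # quotient-space R^2
tree3 = grow_tree([(0.0, 0.0, 0.0)], dim=3, lower_tree=tree2)   # full space R^3
```

In the actual algorithm the projection must be admissible (infeasibility in the quotient-space implies infeasibility upstairs), which is what makes the bias sound; that machinery is omitted here.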
Simplifying a text's grammar and structure is a useful skill most of us acquire in school, but AI typically has a tougher go of it, owing to a lack of linguistic knowledge. That said, scientists at Facebook AI Research and Inria are progressing toward a simplification model dubbed ACCESS (AudienCe-CEntric Sentence Simplification), which they claim enables customization of text length, amount of paraphrasing, lexical complexity, syntactic complexity, and other parameters while preserving coherence. "Text simplification can be beneficial for people with cognitive disabilities, such as aphasia, dyslexia, and autism, but also for second language learners and people with low literacy," wrote the researchers in a preprint paper detailing their work. "The type of simplification needed for each of these audiences is different … Yet, research in text simplification has been mostly focused on developing models that generate a single generic simplification for a given source text with no possibility to adapt outputs for the needs of various target populations." To this end, the team tapped seq2seq, a general-purpose encoder-decoder framework that takes data and its context as inputs.
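A common way to make a seq2seq model controllable along axes like length and lexical complexity is to prepend control tokens to the source sentence at training time, so the decoder learns to honor them at inference. The sketch below illustrates that conditioning step only; the token names (`NbChars`, `LevSim`) and format are illustrative assumptions, not ACCESS's exact vocabulary.

```python
def conditioned_input(source, **controls):
    """Prepend control tokens (e.g. a target character-length ratio or a
    lexical-similarity target) to the source sentence. A seq2seq model
    trained on such inputs learns to produce outputs matching the tokens."""
    prefix = " ".join(f"<{name}_{value}>" for name, value in sorted(controls.items()))
    return f"{prefix} {source}"

# Ask (hypothetically) for an output ~80% as long and moderately paraphrased:
print(conditioned_input("The cat sat on the mat.", NbChars=0.8, LevSim=0.6))
# → <LevSim_0.6> <NbChars_0.8> The cat sat on the mat.
```

The appeal of this design is that one trained model serves many audiences: changing the token values at inference time changes the simplification style without retraining.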
IRS Commissioner John Koskinen said Wednesday that the agency fully supports tax reform efforts, acknowledging the existing U.S. tax code is such a "mess" that even he struggles to fill out his federal returns. "It is difficult for the IRS commissioner to fill out his tax returns," Koskinen, who has a Yale law degree, told Fox News at a Washington, D.C., conference on improving government services. Koskinen, who became commissioner in 2013 under then-President Barack Obama, made clear the agency doesn't meddle in policy but said he started talking with the new administration about tax reform just weeks after then-candidate Donald Trump was elected, and the IRS continues to offer help. "One of the questions was, 'If major tax reform simplification was on the table, would there be anybody in the IRS opposed to that?' I was quick to respond, after having talked to employees, that nobody is more supportive of tax simplification than the IRS," he said. Koskinen said he started soliciting suggestions from employees last year about ways to simplify and streamline the code and called such reform "one of our highest priorities."
The obvious result of the growth in AI-related business is a global shortage of sufficiently trained and skilled AI software development experts, who are needed by millions of projects around the world. At the same time, the currently very high complexity of AI and machine learning application development keeps many people, including those from Computer Science-related fields, out of the AI industry, and drives the need for PowerBrains' next-generation AI development environment, AI-IDE, which drastically simplifies the development of AI-based solutions. To support PowerBrains' vision of enabling the easy implementation of Artificial Intelligence (AI), our core product is PowerBrains' Integrated AI Software Development Environment (AI-IDE), which enables rapid AI design, implementation, training, testing, and validation, making PowerBrains a 'Software Factory' for digital 'brains' – trained software structures.
Lexical simplification (LS) aims to replace complex words in a given sentence with simpler alternatives of equivalent meaning. Recent unsupervised lexical simplification approaches rely only on the complex word itself, regardless of the given sentence, to generate candidate substitutions, which inevitably produces a large number of spurious candidates. We present a simple BERT-based LS approach that makes use of the pre-trained unsupervised deep bidirectional representations of BERT. We feed the given sentence, with the complex word masked, into the masked language model of BERT to generate candidate substitutions. Because the whole sentence is considered, the generated simpler alternatives better preserve the cohesion and coherence of the sentence. Experimental results show that our approach obtains a clear improvement on standard LS benchmarks.
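The key step of the approach described above is constructing the masked input: the complex word is replaced by BERT's mask token while the rest of the sentence is kept as context for the masked language model. A minimal sketch of that step, assuming simple whitespace tokenization (a real implementation would use BERT's own tokenizer and handle subwords):

```python
def mask_complex_word(sentence, complex_word, mask_token="[MASK]"):
    """Replace the complex word with the mask token, keeping the full
    sentence as context so the MLM can propose substitutions that fit it."""
    words = sentence.split()
    return " ".join(mask_token if w == complex_word else w for w in words)

masked = mask_complex_word("The committee will scrutinize the proposal", "scrutinize")
print(masked)  # → The committee will [MASK] the proposal

# `masked` is then fed to a fill-mask model; e.g. with Hugging Face transformers
# (not run here, requires a model download):
#   from transformers import pipeline
#   candidates = pipeline("fill-mask", model="bert-base-uncased")(masked)
```

Each returned candidate is scored in context, which is what filters out the spurious substitutions that context-free, word-only generation produces.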