Graph Matching in Theory and Practice

Communications of the ACM

Back in 1979, two scientists wrote a seminal textbook on computational complexity theory, describing how some problems are hard to solve: the known algorithms for handling them grow in running time so fast that no computer can be guaranteed to solve even moderately sized instances in the lifetime of the universe. While most problems could be deemed either relatively easy or hard for a computer to solve, a few fell into a strange nether region where they could not be classified as either. The authors, Michael Garey and David S. Johnson, helpfully provided an appendix listing a dozen problems not known to fit into one category or the other. "The very first one that's listed is graph isomorphism," says Lance Fortnow, chair of computer science at the Georgia Institute of Technology.
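
The difficulty is easy to see in miniature. The sketch below is an illustrative brute-force check, not a method discussed in the article: it tests whether two graphs are isomorphic by trying every vertex bijection, so its work grows factorially with the number of vertices, which is exactly the kind of blow-up that puts even moderately sized inputs out of reach.

```python
from itertools import permutations

def are_isomorphic(adj_a, adj_b):
    """Brute-force graph isomorphism test.

    adj_a, adj_b: dicts mapping each vertex to a set of its neighbours.
    Tries every bijection between the vertex sets, so the running time
    grows like n! -- already hopeless for a few dozen vertices.
    """
    if len(adj_a) != len(adj_b):
        return False
    verts_a = list(adj_a)
    verts_b = list(adj_b)
    for perm in permutations(verts_b):
        mapping = dict(zip(verts_a, perm))
        # A bijection is an isomorphism if it maps neighbourhoods onto
        # neighbourhoods for every vertex.
        if all(
            {mapping[v] for v in adj_a[u]} == adj_b[mapping[u]]
            for u in verts_a
        ):
            return True
    return False

# A 4-cycle with relabelled vertices is still a 4-cycle.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
c4_relabelled = {"a": {"b", "d"}, "b": {"a", "c"},
                 "c": {"b", "d"}, "d": {"a", "c"}}
print(are_isomorphic(c4, c4_relabelled))  # True
```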


When Quantum Computation Meets Data Science: Making Data Science Quantum · Issue 4.1, Winter 2022

#artificialintelligence

The interface of statistics and computation is a signature issue in data science, which characteristically uses statistics, computation, and domain science knowledge to extract information and insights from data and to solve big data problems. Discovering solutions to complicated real-world problems with big data requires developing statistical methods to analyze the data and computational algorithms to implement the analysis procedures. The statistical analysis is often computationally challenging: for example, statistical approaches that are mathematically optimal may not be computationally tractable, and data analysis methods that are computationally efficient may not be statistically optimal. Thus, sound statistics and tractable computation constitute fundamental but often competing pillars of data science, and a tradeoff is needed to balance statistical efficiency against computational efficiency.
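
To make that tradeoff concrete, here is a hedged sketch (an illustration, not an example from the article) contrasting exact least squares, which is statistically efficient but requires solving a p-by-p system, with a single pass of stochastic gradient descent, which costs only O(p) work per observation but yields a noisier estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated linear-regression data: y = X @ beta + noise.
n, p = 5000, 20
X = rng.normal(size=(n, p))
beta_true = rng.normal(size=p)
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Statistically efficient: exact least squares, O(n p^2 + p^3) work.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Computationally cheap: one pass of stochastic gradient descent,
# O(p) work per observation, but a noisier estimate.
beta_sgd = np.zeros(p)
step = 0.01
for i in range(n):
    grad = (X[i] @ beta_sgd - y[i]) * X[i]
    beta_sgd -= step * grad

print("OLS estimation error:", np.linalg.norm(beta_ols - beta_true))
print("SGD estimation error:", np.linalg.norm(beta_sgd - beta_true))
```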


Is Complexity Important for Philosophy of Mind?

arXiv.org Artificial Intelligence

Computational complexity has often been ignored in philosophy of mind and in philosophical studies of artificial intelligence. The purpose of this paper is threefold. First and foremost, to show the importance of complexity, rather than mere computability, in philosophical and AI problems. Second, to rephrase the notion of computability in terms of solvability, i.e., treating computability as insufficient for establishing intelligence; the Church-Turing thesis is therefore revisited and rephrased in order to capture the ontological background of spatial and temporal complexity. Third, to emphasize ontological differences between different time complexities, which seem to provide a solid basis for a better understanding of artificial intelligence in general.
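
As a back-of-the-envelope illustration of that last point (a numeric sketch, not taken from the paper): the difference between a polynomial and an exponential time bound is not merely quantitative, since at one step per nanosecond an exponential bound exceeds the age of the universe for quite small inputs while a cubic bound remains negligible.

```python
# Wall-clock time implied by a polynomial bound (n**3) versus an
# exponential bound (2**n), assuming one step per nanosecond.
STEPS_PER_SECOND = 10**9
SECONDS_PER_YEAR = 3.15 * 10**7

for n in (10, 50, 100, 300):
    poly_years = n**3 / (STEPS_PER_SECOND * SECONDS_PER_YEAR)
    expo_years = 2**n / (STEPS_PER_SECOND * SECONDS_PER_YEAR)
    print(f"n={n:>3}  n^3: {poly_years:.2e} years   2^n: {expo_years:.2e} years")
```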


Quantum Leap

Communications of the ACM

Hopes for quantum computing have long been buoyed by the existence of algorithms that would solve some particularly challenging problems with exponentially fewer operations than any known algorithm for conventional computers. Many experts believe, but have been unable to prove, that these problems will resist even the cleverest non-quantum algorithms. Recently, researchers have provided the strongest evidence yet that even if conventional computers were made much more powerful, they probably still could not efficiently solve some problems that a quantum computer could. That such problems exist is a long-standing conjecture about the greater capability of quantum computers. "It was really the first big conjecture in quantum complexity theory," said computer scientist Umesh Vazirani of the University of California, Berkeley, who proposed the conjecture with then-student Ethan Bernstein in the 1993 paper (updated in 1997) that established the field.
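
For a small, hedged illustration of a quantum query advantage (a classical NumPy simulation of the Bernstein-Vazirani problem from that 1993 paper, not the harder problem behind the conjecture the article describes): the quantum circuit recovers a hidden n-bit string with a single oracle query, whereas a classical algorithm needs n queries to the same oracle, since each classical answer reveals at most one bit of information.

```python
import numpy as np

def bernstein_vazirani(secret):
    """Classically simulate the Bernstein-Vazirani circuit on a state vector.

    secret: a string of n bits, e.g. "1011". The oracle computes
    f(x) = secret . x (mod 2) as a phase flip, and a single query
    suffices to read off the whole string.
    """
    n = len(secret)
    s = int(secret, 2)
    dim = 2 ** n

    # n-qubit Hadamard as an explicit matrix (fine for small n).
    h1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    h = np.array([[1.0]])
    for _ in range(n):
        h = np.kron(h, h1)

    # Start in |0...0> and apply Hadamards: the uniform superposition.
    state = h @ np.eye(dim)[0]

    # Single phase-oracle query: amplitude of |x> picks up (-1)^(s . x).
    phases = np.array([(-1.0) ** bin(x & s).count("1") for x in range(dim)])
    state = phases * state

    # Second layer of Hadamards; all amplitude now sits on |s>.
    state = h @ state
    outcome = int(np.argmax(np.abs(state)))
    return format(outcome, f"0{n}b")

print(bernstein_vazirani("1011"))  # prints the hidden string: 1011
```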