Know Your Stuff is a new column that unlocks the hidden secrets of the everyday products you own. Dental care has come a long way since we were first using bone and hog-hair brushes in sixth-century China, but based on some of the raised eyebrows I've seen at the recent CES electronics show, some might argue that the pendulum has swung too far in the other direction. Oral-B and Colgate, two household names in oral hygiene, each released state-of-the-art toothbrushes that promise to get your teeth cleaner than a standard brush. They join the ranks of dozens of other "smart brushes" that sport lists of features rivaling those of some laptops, which of course raises the question: why? Aren't we fine with toothbrushes as they already are?
January 17, 2020 | Written by: John R. Smith

IBM Research has a long history as a leader in the field of Artificial Intelligence (AI). IBM's pioneering work in AI dates back to the field's inception in the 1950s, when IBM developed one of the first instances of machine learning, applied to the game of checkers. Since then, IBM has achieved major milestones in AI: Deep Blue, the first chess-playing computer to defeat a reigning world champion; Watson, the first natural language question-answering system able to win at Jeopardy!; and last year's Project Debater, the first AI system that can build persuasive arguments on its own and effectively engage in debates on complex topics. IBM's leadership in AI continued in earnest in 2019, which was notable for a growing focus on critical topics such as making trustworthy AI work in practice, creating new AI engineering paradigms to scale AI for broader use, and continuing to advance core AI capabilities in language, speech, vision, knowledge and reasoning, human-centered AI, and more. While recent years have seen incredible progress in "narrow AI" built on technologies like deep learning, in 2019 IBM Research pushed its AI research toward a new foundational underpinning of AI for enterprise applications: learning more from less; enabling trusted AI by ensuring the fairness, explainability, adversarial robustness, and transparency of AI systems; and integrating learning and reasoning as a way to understand more in order to do more.
Yes, companies use AI to automate various tasks, while consumers use AI to make their daily routines easier. But governments–and in particular militaries–also have a massive interest in the speed and scale offered by AI. Nation states are already using artificial intelligence to monitor their own citizens, and as the UK's Ministry of Defence (MoD) revealed last week, they'll also be using AI to make decisions related to national security and warfare. The MoD's Defence and Security Accelerator (DASA) has announced the initial injection of £4 million in funding for new projects and startups exploring how to use AI in the context of the British Navy. In particular, the DASA is looking to support AI- and machine learning-based technology that will "revolutionise the way warships make decisions and process thousands of strands of intelligence and data."
Gone are the days of a room full of traders frantically executing trades while trying to keep pace with a volatile market. Computer algorithms are the technology that shapes the market today. Up to 70% of all trades in the United States are now performed by machines rather than humans. As algorithmic trading continues to grow, the technology keeps improving as well. A number of significant technological advances are already being implemented and may shape the near future of trading.
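As an illustration of the kind of rule-based logic such systems automate, here is a minimal sketch of a classic moving-average crossover strategy. The prices, window sizes, and signals are entirely hypothetical and not drawn from any real trading system.

```python
# A toy moving-average crossover strategy: buy when the short-term average
# of recent prices rises above the long-term average, sell when it falls below.
# All prices and window sizes here are hypothetical.

def moving_average(prices, window):
    """Simple moving average of the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """Return 'buy', 'sell', or 'hold' based on a short/long crossover."""
    if len(prices) < long:
        return "hold"  # not enough history yet
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"

# Rising prices: the short average leads the long average upward.
print(crossover_signal([100, 101, 102, 104, 107]))  # buy
```

Real trading systems layer risk limits, transaction costs, and far richer signals on top of rules like this; the crossover is only the simplest recognisable example of an automated trading decision.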
One characteristic human foible is how easily we can falsely redefine what we experience. This flaw, captured in the Thomas Theorem, suggests, "If men define situations as real, they are real in their consequences." Put another way, humans respond not only to the objective features of their situations but also to their own subjective interpretations of those situations, even when those beliefs are factually wrong. Other shortcomings include our willingness to believe information that is not true and a propensity to be swayed as easily by emotional appeals as by reason, as demonstrated by the "North Dakota Crash" falsehood. Machines can be taught to exploit these flaws more effectively than humans can: artificial intelligence algorithms can test what content works and what does not, over and over again, on millions of people at high speed, until their targets react as desired.
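The trial-and-error loop described above, testing content repeatedly and keeping whatever gets a reaction, is commonly implemented with multi-armed bandit algorithms. The epsilon-greedy sketch below is a hypothetical illustration of the general technique, not any specific platform's system; the "content variants" and their simulated response rates are invented.

```python
import random

# Epsilon-greedy multi-armed bandit: mostly serve the variant with the best
# observed response rate ("exploit"), occasionally try others ("explore").
# The three variants' true response rates below are hypothetical.

def epsilon_greedy(response_rates, trials=10000, epsilon=0.1, seed=42):
    """Return how often each variant ended up being served."""
    rng = random.Random(seed)
    counts = [0] * len(response_rates)
    rewards = [0.0] * len(response_rates)
    for _ in range(trials):
        if rng.random() < epsilon:
            arm = rng.randrange(len(response_rates))  # explore at random
        else:
            # exploit: pick the variant with the best observed average so far
            averages = [rewards[i] / counts[i] if counts[i] else 0.0
                        for i in range(len(response_rates))]
            arm = averages.index(max(averages))
        # simulate whether the target reacted to this variant
        reward = 1.0 if rng.random() < response_rates[arm] else 0.0
        counts[arm] += 1
        rewards[arm] += reward
    return counts

# The variant with the highest true response rate (12%) ends up dominating.
counts = epsilon_greedy([0.02, 0.05, 0.12])
print(counts)
```

The unsettling point of the article is precisely this loop: nothing in it needs to understand people, it only needs enough trials to discover which appeals make them react.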
The coding of medical diagnosis and treatment has always been a challenging issue. Translating a patient's complex symptoms, and a clinician's efforts to address them, into a clear and unambiguous classification code was difficult even in simpler times. Now, however, hospitals and health insurance companies want very detailed information on what was wrong with a patient and the steps taken to treat them: for clinical record-keeping, for hospital operations review and planning, and, perhaps most importantly, for financial reimbursement. The current international standard for medical coding is ICD-10 (the tenth revision of the International Classification of Diseases), from the World Health Organization (WHO). ICD-10 has over 14,000 codes for diagnoses.
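At its simplest, the coding step maps a recorded diagnosis to an ICD-10 code. The sketch below illustrates the idea with a three-entry lookup table (the codes shown are real ICD-10 entries, but this tiny table is purely illustrative); real coding software must handle thousands of codes, synonyms, and clinical context.

```python
# A toy sketch of the coding step: mapping an exact diagnosis string to its
# ICD-10 code. The full ICD-10 has over 14,000 diagnosis codes; this
# three-entry sample table is illustrative only.

ICD10_SAMPLE = {
    "essential (primary) hypertension": "I10",
    "type 2 diabetes mellitus without complications": "E11.9",
    "pneumonia, unspecified organism": "J18.9",
}

def lookup_code(diagnosis):
    """Return the ICD-10 code for an exact diagnosis string, or None."""
    return ICD10_SAMPLE.get(diagnosis.strip().lower())

print(lookup_code("Essential (primary) hypertension"))  # I10
```

The hard part, of course, is everything this sketch leaves out: clinicians rarely write diagnoses in the exact wording of the classification, which is why coding remains a specialist task and an attractive target for automation.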
Data science consultant Cathy O'Neil helps companies audit their algorithms for a living. And when it comes to how algorithms and artificial intelligence can enable bias in the job hiring process, she said the biggest issue isn't even with the employers themselves. A new Illinois law that aims to help job seekers understand how AI tools are used to evaluate them in video interviews recently resurfaced the debate over AI's role in recruiting. But O'Neil believes the law tries to tackle bias too late in the process. "The problem actually lies before the application comes in. The problem lies in the pipeline to match job seekers with jobs," said O'Neil, founder and CEO of O'Neil Risk Consulting & Algorithmic Auditing.
Chile is making its mark in the world, and specifically in Latin America, with its focus on technology. But as the country moves closer to its goal, it sees the need for more coordinated policies. That, and the desire to build a more "informed and knowledge-based" society, prompted the creation of its own Ministry of Science, Technology, Knowledge and Innovation in 2018. Andres Couve has spent his entire career in research and development. A biologist with a PhD in Cell Biology from the prestigious Mount Sinai School of Medicine in New York, he also completed postdoctoral training in Neuroscience at University College London (UCL).
What IT team wouldn't like to have a crystal ball that could predict the IT future, letting them fix application and infrastructure performance problems before they arise? Well, given the current shortage of crystal balls, the union of artificial intelligence (AI), machine learning (ML), and utilisation forecasting is the next best thing for anticipating and avoiding issues that threaten the overall health and performance of all IT infrastructure components. The significance of AI has not been lost on organisations in the United Kingdom, with 43 per cent of them believing that AI will play a big role in their operations. Utilisation forecasting is a technique that applies machine learning algorithms to produce daily usage forecasts across CPUs, physical and virtual servers, disks, storage, bandwidth, and other network elements, enabling networking teams to manage resources proactively. This technique helps IT engineers and network admins prevent downtime caused by over-utilisation.
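As a minimal illustration of the idea (not of any specific product), the sketch below fits an ordinary least-squares linear trend to a week of hypothetical daily CPU utilisation figures and extrapolates one day ahead; production forecasting tools use far richer models and many more signals.

```python
# A minimal sketch of utilisation forecasting: fit a least-squares linear
# trend to daily CPU utilisation (%) and flag the resource before it crosses
# a capacity threshold. The data and the 90% threshold are hypothetical.

def linear_forecast(history, steps_ahead=1):
    """Fit y = a + b*x by ordinary least squares and extrapolate."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a + b * (n - 1 + steps_ahead)

daily_cpu = [62, 65, 67, 70, 74, 77, 80]  # last seven days, % utilisation
tomorrow = linear_forecast(daily_cpu)
print(f"forecast: {tomorrow:.1f}%", "ALERT" if tomorrow > 90 else "ok")
```

Extending the forecast a few more steps ahead shows the value of the technique: the same trend crosses 90% within days, giving the team time to add capacity before users notice anything.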
IDC predicts spending on AI systems will reach $97.9B in 2023, more than two and a half times the $37.5B that will be spent in 2019. Machine learning's growing adoption across industries reflects how effective its algorithms, frameworks and techniques are at solving complex problems quickly. The number of open jobs requiring TensorFlow experience is a useful way to quantify how prevalent machine learning is becoming in business today. As of today, there are 4,134 open positions in the U.S. on LinkedIn that require TensorFlow expertise and 12,172 open positions worldwide. Open jobs on LinkedIn requesting machine learning expertise in the U.S. further reflect its growing dominance across businesses.