If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
What is 'platformisation' and how does it relate to digital manufacturing? How can cloud-based design help steelmakers improve efficiency and reduce costs? How far can we go with 'deep machine learning' without losing our grip on ethical responsibility, and what exactly is 'knowledge engineering'? These are all questions that need to be answered if steelmakers are going to gain a greater understanding of the world surrounding Industry 4.0 and its associated technologies. Augmented reality, robotics, cyber-enabled design and manufacturing – they are all subjects that need to be 'top of mind' in the steel industry of the future.
Moving people and cargo around the globe, safely and on time, is a logistical challenge that draws on vast amounts of data. This data is a powerful but under-leveraged resource that can be put to greater use with artificial intelligence (AI). Think more efficient fleets, better route and capacity planning, and smoother passenger bookings and deliveries when faced with potential service disruptions. You may have heard the terms analytics, advanced analytics, machine learning and AI. AI is often built from machine-learning algorithms, which owe their effectiveness to training data.
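The point that machine-learning algorithms "owe their effectiveness to training data" can be made concrete with a toy sketch: a model fitted to labelled examples of past journeys can then estimate outcomes for unseen cases. The numbers and the load-to-delay relationship below are purely illustrative, not real logistics data.

```python
# Minimal sketch: a model "learns" from labelled training examples.
# Here a one-variable least-squares fit maps cargo load (tonnes) to
# delivery delay (minutes), then predicts the delay for an unseen load.
# All figures are hypothetical, for illustration only.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical training data: cargo load (tonnes) -> delay (minutes)
loads  = [10, 20, 30, 40, 50]
delays = [ 5, 11, 14, 21, 24]

a, b = fit_line(loads, delays)
predicted = a * 35 + b  # delay estimate for an unseen 35-tonne load
print(round(predicted, 1))  # -> 17.4
```

The quality of the prediction depends entirely on the training pairs supplied: change or corrupt the data and the fitted line (and every estimate drawn from it) changes with it, which is the sense in which the algorithms owe their effectiveness to the data.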
An employer in Spain may not be able to fire a worker caught on a surveillance camera doing something prohibited if the company hasn't informed workers about the video system and its purpose, according to a recent trial court decision. In a case involving an employee fired after a security camera captured him in a parking-lot fight after work hours, a Pamplona labor court ruled that the video evidence was inadmissible under the European Union's General Data Protection Regulation (GDPR) and case law from the European Court of Human Rights (ECHR). "The judgment is of great interest since it is the first ruling by a Spanish court on the validity that can be given to the evidence of video recordings after the publication of the new Spanish Data Protection Law and also an interpretation of the new European Data Protection Regulation," according to a blog post from Manuel Vargas of Barcelona's Marti & Associats law firm. Under Spain's own data-protection law, employers who record a worker doing something illegal are considered to have fulfilled their duty to inform so long as they have posted a sign identifying a video surveillance zone, Vargas wrote. He also noted that recent case law from the Spanish Supreme Court endorses the idea that employers aren't obligated to notify workers that they plan to use video cameras to monitor their activity for possible disciplinary purposes.
It admittedly sounds a little like Big Brother, that a robot can tell significant things about your personality merely by looking into your eyes. Yet, that is the hiring territory that we are fast approaching – although we may not be sitting across from androids in interviews anytime soon. The use of artificial intelligence in making HR decisions is, while fraught with peril, not without its promising aspects. In an era when it is increasingly difficult for businesses to unearth the best job candidates, we may yet see the day when technology makes it possible to separate good from bad in the blink of an eye. Despite caveats about security and privacy, relying on AI would appear to be a method far superior to digging through a pile of resumes or asking ice-breaking questions like, "What's the last book you read?" Hiring good people – people who are talented, agreeable and work well with their co-workers – goes a long way toward nipping workplace conflicts in the bud.
Ultimate Systems is a new AI startup with technology that is ready to disrupt major industries, initially with monitoring and surveillance systems, both civilian and military. Would you like to find out more? We've developed a new, state-of-the-art technology leveraging the latest advancements in Artificial Intelligence (AI) backed by Machine Learning (ML) and Deep Learning (DL) to monitor images or any other data source, and raise an alert when something of interest is detected. Our core technology is useful in a broad range of applications. Over the next few weeks we'll be uploading demo videos showing some of our working prototypes, as proof of concept -- so watch this space. For example, one working prototype is a CCTV platform that detects weapons, such as a knife or gun, with high reliability (demonstrated through 1,000 different test scenarios).
Researchers are showing off creepy new software that uses machine learning to allow people to add, delete or change the words coming out of someone's mouth. The work is the latest evidence that our ability to edit what gets said in videos and create so-called deepfakes is becoming easier, posing a potential problem for election integrity and the overall battle against online disinformation. The researchers, who come from Stanford University, the Max Planck Institute for Informatics, Princeton University and Adobe Research, published a number of examples showing off the technology.
More organizations are adopting artificial intelligence (AI). Fourteen percent of global CIOs have already deployed AI and 48% will deploy it in 2019 or by 2020, according to Gartner's 2019 CIO Agenda survey. "While adoption is increasing, some organizations are still questioning the business impact and benefits. Today, we witness three barriers to the adoption of AI," says Brian Manusama, Senior Director Analyst, Gartner.
The rise of killer robots is now unstoppable and a new digital Geneva Convention is essential to protect the world from the growing threat they pose, according to the president of the world's biggest technology company. In an interview with The Telegraph, Brad Smith, president of Microsoft, said the use of 'lethal autonomous weapon systems' poses a host of new ethical questions which need to be considered by governments as a matter of urgency. He said the rapidly advancing technology – in which flying, swimming or walking drones can be equipped with lethal weapon systems such as missiles, bombs or guns, and programmed to operate entirely or partially autonomously – "ultimately will spread… to many countries". The US, China, Israel, South Korea, Russia and the UK are all developing weapon systems with a significant degree of autonomy in the critical functions of selecting and attacking targets. The technology is a growing focus for many militaries because replacing troops with machines can make the decision to go to war easier.