If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Team meetings are an essential part of a company culture and help build teamwork, but we all know they could be more productive – if only the mechanics of the meeting itself were set up to be more helpful and less distracting. Inspiration and spontaneity can be stifled by having to click through presentations, pass out documents, or make sure action points are duly noted. So let's face it: conventionally run workplace meetings are unfit for purpose in agile company cultures. This year is seeing development trends toward AI assistance technologies that support truly collaborative meetings – technologies that will transform how colleagues interact and will boost productivity. AI is revolutionising how we communicate using collaboration tools.
One year ago, we wrote about the world's first robot lawyer. It is a website with a chatbot that started off with a single, free legal service: helping to appeal unfair parking tickets. When the article was published, the service was available in the UK, and in New York and Seattle. At the time, it had helped overturn traffic tickets worth 4 million dollars. Beyond appealing parking tickets, the website could also help you claim compensation if your flight was delayed.
Some people worry that hackers could infiltrate their smart speakers and spy on them, but that hasn't been the practical reality -- not for Amazon's Echo, at least. A team of researchers from China's Tencent has come about as close as you can get right now, however. They've disclosed an attack on the Echo that uses both a modified speaker and a string of Alexa web interface vulnerabilities to remotely eavesdrop on regular models. It sounds nefarious, but it requires more steps than would be viable for most intruders. The team created a rogue Echo by removing a flash memory chip from the device, modifying its firmware to get root access, and soldering the chip back onto the circuit board.
One of my biggest complaints about terminology in the industry is the claim that data from conversations is "unstructured data". After all, how could people communicate, whether in speech or in writing, if there were no structure that aids meaning? Syntax is the structure of language, and it clearly helps define semantics, the meaning of the communication. To understand how computers are rapidly improving, it's important to look at how natural language differs from what computers have historically processed. From flat-file sequential data storage models to relational databases (RDBMS), there is a decades-long history of rigidly structured data.
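The contrast above can be made concrete with a small sketch. The record, the utterance, and the keyword rule below are all hypothetical illustrations, not from any real system: the point is only that "unstructured" conversational text still carries exploitable structure (word order, punctuation, lexical cues) from which a rigidly structured record can be partially recovered.

```python
# Hypothetical illustration: the same fact as a rigidly structured,
# relational-style record and as "unstructured" conversational text.
structured_row = {"customer": "Ada", "issue": "billing", "priority": "high"}

utterance = "Hi, this is Ada -- I have an urgent question about my bill."

def naive_intent(text):
    """A deliberately naive keyword rule that recovers part of the record
    from free-form text, exploiting its lexical structure."""
    text = text.lower()
    issue = "billing" if "bill" in text else "unknown"
    priority = "high" if "urgent" in text else "normal"
    return {"issue": issue, "priority": priority}

print(naive_intent(utterance))  # {'issue': 'billing', 'priority': 'high'}
```

Real conversational-AI systems replace the keyword rule with statistical models, but they rely on exactly this kind of latent structure in language.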
Last Sunday, a particularly unusual DotA 2 tournament took place. DotA 2, a complicated real-time strategy game, is among the most popular e-sports in the world. The five players of one team--Blitz, Cap, Fogged, Merlini, and MoonMeander--were ranked in the 99.95th percentile, inarguably among the best DotA 2 players in the world. However, their opponent still defeated them in two out of three games, winning the tournament. An evenly matched game is supposed to take 45 minutes, but these two were over in 14 and 21 minutes, respectively.
Doctors practice medicine to deliver care, not do data entry. Yet in the era of electronic medical records (EMRs), for every hour spent with a patient, physicians spend nearly two hours on paperwork. What if technology could take care of the paperwork for us? Record-keeping systems in health care were built for back-office functions, not bedside medicine. Most EMR vendors started out building products to collect payments and schedule appointments.
No, I won't use the Amazon Echo to buy things. The worry of Alexa messing things up isn't worth the convenience. Although I've been blogging at ZDNet for a couple of months about Amazon, Alexa, and other voice-first devices, this post marks a beginning of sorts. Due to the graciousness of my friend and CRM Playaz partner Paul Greenberg, I have been doing a weekly guest post on his very influential CRM industry blog, Social CRM: The Conversation. But this post is my first one under a new blog I'm calling Voices Carry.
"At IBC 2018 we will launch VSNCrea, an HTML5 and cloud-based version of our previous software VSNCreaTV for traffic and scheduling, and our VSNExplorer MAM media management platform integrated with the artificial intelligence systems of IBM Watson, Google Cloud, Microsoft Azure and ETIQMEDIA for automatic metadata detection. These new releases further underscore our ongoing commitment to supporting the worldwide media and entertainment industry through an increasingly sophisticated portfolio of cloud-based and innovative solutions." VSNCrea is the company's new software for TV, radio and second-screen traffic and scheduling. It enables the management of a company's content production catalog, either owned or acquired from third parties, as well as its advertising, production workflows, programming and broadcast planning -- all from a single user interface. VSNCrea has been completely redesigned to offer broadcasters a brand-new, modern and user-friendly web interface that allows them to make quick and accurate decisions about when to broadcast a certain piece of content, thanks to its unified functionalities and workflows in one single interface.
Here, we will explore and teach you about the incredible user experience opportunities you can take advantage of when designing for interaction beyond the classical Graphical User Interface (GUI). Non-visual user interaction ("no-UI") was pioneered by the ground-breaking work of researchers who realized that, in today's world, we are surrounded by computers and applications that constantly demand our attention: smartphones, tablets, laptops and smart TVs compete for brief moments of our time to notify us about an event or to request an action. Staying abreast of these developments will turbo-charge your skill set, so you can reach users in more ingenious ways. The bulk of these attention requests and actions take place through interaction with Graphical User Interfaces, peppered with a short audio or vibration cue here and there. However, rich user experiences do not depend only on good visual design: they can take advantage of the context awareness, sensors and multimodal output capabilities of modern computing devices.
TensorFlow is a low-level deep learning package that requires users to deal with many complicated elements to construct a successful model. However, TensorFlow is also powerful in production, which is why most companies choose it as their major platform. Keras, on the other hand, provides a user-friendly API that helps users quickly build complicated deep learning models, but it is less suited to shipping products. Can we build our models in Keras and export them to a TensorFlow-compatible format (Protocol Buffers, .pb)? In this tutorial, I will show you how to do it step by step.