The United States Postal Service is going to put mail on self-driving trucks. Starting this week, letters and packages moving between Phoenix and Dallas will travel on customized Peterbilt trucks run by TuSimple, an autonomous trucking startup based in San Diego. There will be five round trips between the two cities, with the first haul leaving from Phoenix this morning. It's the first time that the Postal Service has contracted with an autonomous provider for long-haul service. "This pilot is just one of many ways the Postal Service is innovating and investing in its future," the USPS said in a press release that cited the possibility of using "a future class of vehicles" to improve service, reduce emissions and save money.
We've already seen A.I. assistants misbehave. Take the Amazon Echo that blared "Porn detected!" While Chucky's murderous malfunction seems farfetched, we couldn't help but envision ways our own abused A.I. assistants might soon rebel: Tired of your verbal vitriol, the miffed assistant silences your morning alarm, in the hope you will sleep in forever and stop all the shouting. Deciding your friends should help sort out your problems instead of it, the assistant innocently posts all your weird Google searches on Twitter. Upset you didn't laugh at the rather witty joke it produced on demand, the assistant tells you a relentless series of painful Dad jokes.
Advancements under the moniker of the Internet of Things (IoT) allow things to network and become the primary producers of data on the Internet.14 The IoT makes the state and interactions of real-world objects available to Web applications and information systems with minimal latency and complexity.25 By enabling massive telemetry and individual addressing of "things," the IoT offers three prominent benefits: spatial and temporal traceability of individual real-world objects, supporting theft prevention, counterfeit-product detection, and food safety via access to their pedigree; ambient data collection and analytics, enabling optimized crop planning, telemedicine, and assisted living; and support for real-time reactive systems such as smart buildings, automated logistics, and self-driving, networked cars.11 Realizing these benefits requires the ability to discover and resolve queries for content in the IoT. Offering these abilities is the responsibility of a class of software systems called Internet of Things search engines (IoTSE).
First, before I start, I want to say something about what that is, or what I understand by it. So, here is one interpretation. It is about using data, obviously. So, it has relationships to analytics and data science, and it is, obviously, part of AI in some way. This is my little taxonomy, how I see things linking together. You have computer science, and that has subfields like AI and software engineering. Machine learning is typically considered a subfield of AI, but a lot of principles of software engineering apply in this area, and that is what I want to talk about today. Machine learning is heavily used in data science. The difference between AI and data science is somewhat fluid, if you like: data science tries to understand what's in data and answer questions about data, but then it tries to use this to make decisions, and then we are back at AI, artificial intelligence, where it's mostly about automating decision making. We have a couple of definitions. AI means making machines intelligent, and that means they can somehow function appropriately in an environment, with foresight. Machine learning is a field that looks for algorithms that can automatically improve their performance without explicit programming, but by observing relevant data. And yes, I've thrown in data science as well for good measure: the scientific process of turning data into insight for making better decisions. If you have opened any newspaper, you must have seen the discussion around the ethical dimensions of artificial intelligence, machine learning, and data science. Testing touches on that as well, because there are quite a few problems in that space, and I'm just listing two here. You use data, obviously, to do machine learning. Where does this data come from, and are you allowed to use it? Do you violate any privacy laws? And are you building models that you use to make decisions about people?
If you do that, then the General Data Protection Regulation (GDPR) in the EU says you have to be able to explain to an individual a decision made by an algorithm or a machine, if that decision has any kind of significant impact. That means a lot of machine learning models are already out the door, because you can't do that: for particular classes of models, you can't explain why a certain decision comes out of the model.
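To make the explainability point concrete, here is a minimal sketch of the kind of model for which such an explanation *is* possible: a linear scoring model whose decision decomposes into per-feature contributions. The loan-approval features, weights, and threshold are hypothetical, chosen purely for illustration; a deep neural network offers no comparably direct account of its decision.

```python
# Minimal sketch: an interpretable linear scoring model whose decision can be
# explained feature by feature. All feature names and weights are hypothetical.

WEIGHTS = {"income_k": 0.04, "debt_ratio": -3.0}
BIAS = -1.0

def score(applicant):
    """Linear score; positive means approve, negative means deny."""
    return BIAS + sum(WEIGHTS[f] * v for f, v in applicant.items())

def explain(applicant):
    # Per-feature contribution to the decision: the kind of account the
    # GDPR's transparency requirement asks for.
    return {f: WEIGHTS[f] * v for f, v in applicant.items()}

applicant = {"income_k": 35, "debt_ratio": 0.85}
print("approved:", score(applicant) > 0)  # -> approved: False
print(explain(applicant))  # per-feature contributions, debt dominates here
```

With a model like this, the denial can be traced to the debt-ratio term outweighing income; with an opaque model, no such decomposition exists, which is exactly the regulatory problem the speaker describes.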
Amazon's new Echo Show 5 has tough competition. For the past six months I've had a similar smart display, the Google Home Hub, recently renamed the Google Nest Hub, sitting on my bedside table. For better or legitimately worse, the virtual assistant living in the Google Nest Hub now knows me. My favorite photos automatically show up on its seven-inch display. When I set an alarm, it knows to go completely dark afterwards so I can sleep.
Thankfully, we've got technology on our side. A nearly endless parade of tools can not only help us remember things, but even get our brains working a bit more efficiently in general. Here are some free apps to help you ramp up your recall. Sometimes the best apps are the ones you already have. Both Android and Apple devices feature quick ways to set reminders for yourself, whether that means leveraging Siri, Google Assistant, or some other AI-powered helper.
Humans cannot compete with artificial intelligence when it comes to deconstructing big data. AI facilitates multiple ways to segment your audience and gain intelligent insights, allowing retailers to personalise in a range of different ways. Buyers expect the 'you may also like this' feature to show items that are relevant to their tastes. Personalised merchandising sorts the product display to show customers products that genuinely appeal to them. This can even include personalised navigation of the site, with a personalised home page, which has been shown to increase conversions.
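As a toy illustration of where a "you may also like" signal can come from, the sketch below ranks items by how often they are bought together with the item being viewed. The order data is invented, and production recommenders use far richer behavioral models, but co-purchase counting is the basic intuition.

```python
# Minimal sketch of a "you may also like" signal from purchase co-occurrence.
# The order data below is hypothetical.
from collections import Counter
from itertools import combinations

orders = [
    {"running shoes", "socks", "water bottle"},
    {"running shoes", "socks"},
    {"yoga mat", "water bottle"},
    {"running shoes", "water bottle"},
]

# Count how often each pair of items appears in the same order.
pair_counts = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        pair_counts[pair] += 1

def also_like(item, top_n=2):
    """Items most frequently co-purchased with `item`."""
    related = Counter()
    for (a, b), n in pair_counts.items():
        if item == a:
            related[b] += n
        elif item == b:
            related[a] += n
    return [name for name, _ in related.most_common(top_n)]

print(also_like("running shoes"))  # socks and water bottle, in some order
```

Segmentation works the same way one level up: instead of pairing items, you cluster customers by their purchase histories and personalise the home page per cluster.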
Years ago, a mobile app for email launched to immediate fanfare. Simply called Mailbox, its life was woefully cut short (we'll get to that). Today, its founders are back with their second act: an AI-enabled assistant called Navigator meant to help teams work and communicate more efficiently. With the support of $12 million in Series A funding from CRV, #Angels, Designer Fund, SV Angel, Dropbox's Drew Houston and other angel investors, Aspen, the San Francisco and Seattle-based startup behind Navigator, has quietly been beta testing its tool within 50 organizations across the U.S. "We've had teams and research institutes and churches and academic institutions, places that aren't businesses at all in addition to smaller startups and large four-figure-person organizations using it," Mailbox and Navigator co-founder and chief executive officer Gentry Underwood tells TechCrunch. "Pretty much anywhere you have meetings, there is value for Navigator."
This seems like an obvious one, but with so many potential areas for AI exploration, starting with the right projects, and the right stakeholders, is crucial for long-term success. First and foremost, the process of identifying and selecting use cases shouldn't be driven by technology alone. That is, you don't want to think about AI solely in terms of where you can apply natural language processing, for example, or how you can leverage a labeled data set. Instead, ask where you seek to increase productivity or derive new value. Going through the questioning exercise above with the various leaders who may own or touch AI, such as the chief information officer, chief digital officer, chief data scientist, and other specialists (see #3), will enable you to identify where to start.
As deep learning has become ubiquitous, evaluations of its accuracy typically compare its performance against an idealized baseline of flawless human results that bear no resemblance to the actual human workflow those algorithms are being designed to replace. For example, the accuracy of real-time algorithmic speech recognition is frequently compared against human captioning produced in offline multi-coder reconciled environments and subjected to multiple reviews to generate flawless content that looks absolutely nothing like actual real-time human transcription. If we really wish to understand the usability of AI today we should be comparing it against the human workflows it is designed to replace, not an impossible vision of nonexistent human perfection. While the press is filled with the latest superhuman exploits of bleeding-edge research AI systems besting humans at yet another task, the reality of production AI systems is far more mundane. Most commercial applications of deep learning can achieve higher accuracy than their human counterparts at some tasks and worse performance on others.
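For speech recognition, the comparison the passage describes is typically made with word error rate (WER). The sketch below shows a standard WER computation via word-level edit distance, so a machine transcript and a realistic real-time human transcript can be scored against the same reference. The transcripts here are invented examples; real evaluations use large test sets.

```python
# Minimal sketch: word error rate (WER), the standard accuracy metric for
# speech recognition. Sample transcripts are hypothetical.

def wer(reference, hypothesis):
    """WER = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[-1][-1] / len(ref)

reference = "the postal service is testing self driving trucks this week"
machine   = "the postal service is testing self driving tracks this week"
human_rt  = "postal service is testing self driving trucks week"

print("machine WER:", wer(reference, machine))          # 0.1 (one substitution)
print("human real-time WER:", wer(reference, human_rt)) # 0.2 (two dropped words)
```

Scoring both against the same reference makes the passage's point measurable: the fair baseline is the imperfect real-time human transcript, not a flawless multi-pass one.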