In the US, today is Inauguration Day, and as Joe Biden prepares to take the oath as our 46th president, it's worth taking a look back at the discussions four years ago. Back then, the "most tech-savvy" president exited as all eyes turned to Donald Trump trading in his Android Twitter machine for a secure device. We know how things went after that. Donald Trump isn't tweeting anymore (at least not from his main accounts), and the country is struggling through a pandemic. The outgoing president just saw his temporary YouTube ban extended and, in one of his last official acts, pardoned Anthony Levandowski for stealing self-driving car secrets from Google's subsidiary Waymo.
The graph represents a network of 1,228 Twitter users whose tweets in the requested range contained "iiot ai", or who were replied to or mentioned in those tweets. The network was obtained from the NodeXL Graph Server on Friday, 25 December 2020 at 11:39 UTC. The requested start date was Friday, 25 December 2020 at 01:01 UTC and the maximum number of tweets (going backward in time) was 7,500. The tweets in the network were tweeted over the 2-day, 10-hour, 13-minute period from Tuesday, 22 December 2020 at 14:46 UTC to Friday, 25 December 2020 at 01:00 UTC. Additional tweets that were mentioned in this data set were also collected from prior time periods.
As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we do not end up in technology-induced dystopias. As Green strongly argues in his book The Smart Enough City, incorporating technology into city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in, and on how to design them. Philosophical and ethical questions are involved, along with various challenges relating to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities, and globally there are calls for technology to be made more humane and human-compatible; Stuart Russell, for example, makes this case in his book Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical challenges, to a successful deployment of AI or ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of them may lead to others, or help in solving them. The paper also discusses the current limitations, pitfalls, and future directions of research in these domains, and how they can fill the current gaps and lead to better solutions.
AWS (Amazon Web Services) is the most popular and widely used cloud service provider. In 2017, AWS released Amazon SageMaker, its fully managed machine learning platform in the cloud, which allows developers to create, train, and deploy their models quickly. In 2018, Amazon SageMaker Ground Truth was launched as a fully managed data-labelling service for generating the high-quality ground-truth datasets on which machine learning models are trained. Ground Truth can route labelling jobs to Amazon Mechanical Turk (the crowdsourcing platform), an internal data-labelling team, or external third-party vendors. Labelling workflows can be customized, or the built-in workflows can be used.
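As a rough illustration of how such a labelling job is wired together, the sketch below assembles the kind of request that boto3's `create_labeling_job` call accepts. All bucket names, ARNs, and parameter values here are hypothetical placeholders, not details from the article:

```python
# Minimal sketch (hypothetical values throughout): assembling a SageMaker
# Ground Truth labeling-job request. The actual call would go through
# boto3.client("sagemaker").create_labeling_job(**request).
def build_labeling_job_request(job_name, input_manifest, output_path,
                               role_arn, workteam_arn):
    """Assemble a create_labeling_job request for a simple classification task."""
    return {
        "LabelingJobName": job_name,
        "LabelAttributeName": "label",
        "InputConfig": {
            # The manifest lists the unlabeled data objects stored in S3.
            "DataSource": {"S3DataSource": {"ManifestS3Uri": input_manifest}}
        },
        "OutputConfig": {"S3OutputLocation": output_path},
        "RoleArn": role_arn,  # IAM role SageMaker assumes to read/write S3
        "HumanTaskConfig": {
            # The workteam decides who labels: a private team,
            # Mechanical Turk workers, or a third-party vendor.
            "WorkteamArn": workteam_arn,
            "UiConfig": {"UiTemplateS3Uri": "s3://example-bucket/template.liquid"},
            "TaskTitle": "Classify images",
            "TaskDescription": "Choose the best label for each image",
            # Multiple workers per object lets Ground Truth consolidate
            # annotations into a higher-quality label.
            "NumberOfHumanWorkersPerDataObject": 3,
            "TaskTimeLimitInSeconds": 300,
        },
    }

request = build_labeling_job_request(
    "demo-job",
    "s3://example-bucket/manifest.json",
    "s3://example-bucket/output/",
    "arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    "arn:aws:sagemaker:us-east-1:123456789012:workteam/private-crowd/demo",
)
```

The request is only constructed here, not submitted; submitting it requires AWS credentials and real S3/IAM resources in place of the placeholders.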
These are tricky topics to navigate, but ones which many journalists are increasingly grappling with as tech stories become more mainstream. There have been some teething issues, though. The classic example came in 2015, when NPR mapped the most common job in every US state using data derived from the Bureau of Labor Statistics. Truck drivers dominated the map. The issue lies in the nuance of what 'truck driver' means; the category includes everything from delivery drivers to those driving 16-wheel lorries.
Ride Vision, an Israeli startup that is building an AI-driven safety system to prevent motorcycle collisions, today announced that it has raised a $7 million Series A round led by crowdsourcing platform OurCrowd. YL Ventures, which typically specializes in cybersecurity startups but also led the company's $2.5 million seed round in 2018, Mobilion VC and motorcycle mirror manufacturer Metagal also participated in this round. The company has now raised a total of $10 million. In addition to this new funding round, Ride Vision also today announced a new partnership with automotive parts manufacturer Continental. "As motorcycle enthusiasts, we at Ride Vision are excited at the prospect of our international launch and our partnership with Continental," Uri Lavi, CEO and co-founder of Ride Vision, said in today's announcement.
This article was originally published by Federico Cheli on Cities Today, the leading news platform on urban mobility and innovation, reaching an international audience of city leaders. Artificial intelligence (AI) applications in the transport sector are stimulating innovations for better and more targeted use of vehicles and infrastructure. This could optimize network performance, support the monitoring and management of traffic, and create the base for solutions that pave the way to future mobility, particularly in cities. Infrastructure management and vehicle design are evolving thanks to the opportunities offered by the widespread use of devices such as smartphones and in-vehicle localization sensors, both for gathering, processing, and exchanging information among users and service providers, and for monitoring the performance of vehicles and the behavior of people. Altogether these create a huge amount of data (big data), which is the primary source for using AI in transport, allowing computers to perform activities that would otherwise require humans, such as driving.
Artificial intelligence (AI) is everywhere. In a typical day, people likely use AI multiple times without even knowing it: Alexa and Siri, Google Maps, Uber and Lyft, autopilot on commercial flights, spam filters and smart email categorization (so anyone using Gmail, Yahoo, or Office 365/Outlook), mobile check deposits, plagiarism checkers, online search, personalized recommendations, and Facebook, Instagram, and Pinterest are all examples of AI. But what happens when people are introduced to a new AI technology? How likely are they to trust it? With an interdisciplinary team of researchers from the University of Kansas, we set out to find out.
The number of emergencies has increased over the years with the growth of urbanization. This pattern has overwhelmed emergency services, which have limited resources, and demands the optimization of response processes. It is partly due to the traditional 'reactive' approach of emergency services to collecting data about incidents, in which a source initiates a call to the emergency number (e.g., 911 in the U.S.), delaying and limiting a potentially optimal response. Crowdsourcing platforms such as Waze provide an opportunity to develop a rapid, 'proactive' approach to collecting data about incidents through crowd-generated observational reports. However, the reliability of reporting sources and the spatio-temporal uncertainty of the reported incidents challenge the design of such a proactive approach. Thus, this paper presents a novel method for emergency incident detection using noisy crowdsourced Waze data. We propose a principled computational framework based on Bayesian theory to model the uncertainty in the reliability of crowd-generated reports and to integrate them across space and time to detect incidents. Extensive experiments using data collected from Waze and officially reported incidents in Nashville, Tennessee, in the U.S. show that our method outperforms strong baselines on both F1-score and AUC. This work provides an extensible framework for incorporating different noisy data sources into proactive incident detection to improve and optimize emergency response operations in our communities.
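The paper's actual framework is not reproduced here, but its core idea, weighing each crowd report by its source's reliability via Bayes' rule, can be sketched in a few lines. The prior, reliabilities, and reports below are all hypothetical numbers for illustration:

```python
# Toy sketch (not the paper's model): Bayesian fusion of crowd reports,
# each weighted by its source's reliability, assuming conditionally
# independent sources given the true incident state.
def incident_posterior(prior, reports):
    """
    prior:   P(incident) before seeing any reports.
    reports: list of (reported: bool, reliability: float) pairs, where
             reliability = P(a source's report matches the ground truth).
    Returns  P(incident | reports).
    """
    odds = prior / (1.0 - prior)  # work in odds form for simple updates
    for reported, r in reports:
        if reported:
            # A positive report multiplies the odds by P(report|incident)/P(report|none).
            odds *= r / (1.0 - r)
        else:
            # A source staying silent (or denying) shifts the odds the other way;
            # note an unreliable source (r < 0.5) saying "no" is weak evidence of "yes".
            odds *= (1.0 - r) / r
    return odds / (1.0 + odds)

# Two fairly reliable users report an incident; one unreliable user does not:
p = incident_posterior(0.05, [(True, 0.9), (True, 0.8), (False, 0.4)])
```

A rare event (5% prior) becomes likely once two reliable sources report it, which is the qualitative behavior the abstract describes; the real framework additionally models spatio-temporal uncertainty, which this sketch omits.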
The outcomes of elections, product sales, and the structure of social connections are all determined by the choices individuals make when presented with a set of options, so understanding the factors that contribute to choice is crucial. Of particular interest are context effects, which occur when the set of available options influences a chooser's relative preferences, as they violate traditional rationality assumptions yet are widespread in practice. However, identifying these effects from observed choices is challenging, often requiring foreknowledge of the effect to be measured. In contrast, we provide a method for the automatic discovery of a broad class of context effects from observed choice data. Our models are easier to train and more flexible than existing models and also yield intuitive, interpretable, and statistically testable context effects. Using our models, we identify new context effects in widely used choice datasets and provide the first analysis of choice set context effects in social network growth.
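As a toy illustration of what a context effect is (this is not the paper's model), a plain multinomial logit keeps the relative preference between two options fixed no matter what else is in the choice set (the independence-of-irrelevant-alternatives property), whereas adding a hypothetical context-dependent term, here a similarity penalty, lets a new option flip that relative preference:

```python
import math

def logit_probs(utilities):
    """Multinomial logit (softmax): ratios of probabilities ignore the rest of the set (IIA)."""
    exps = {a: math.exp(u) for a, u in utilities.items()}
    z = sum(exps.values())
    return {a: e / z for a, e in exps.items()}

def context_probs(utilities, similarity):
    """Toy context effect: penalize each option by its similarity to the others present."""
    adjusted = {
        a: u - sum(similarity.get(frozenset((a, b)), 0.0)
                   for b in utilities if b != a)
        for a, u in utilities.items()
    }
    return logit_probs(adjusted)

base = {"A": 1.0, "B": 0.5}
sim = {frozenset(("A", "C")): 0.8}  # hypothetical: C is a near-duplicate of A

p2 = logit_probs(base)                        # A preferred over B in the pair
p3 = context_probs({**base, "C": 0.2}, sim)   # adding C drags A down below B
```

Under plain logit, adding C would leave the A-to-B probability ratio unchanged; under the context-dependent model, the near-duplicate C reverses the A-versus-B preference, which is exactly the kind of set-dependent behavior the abstract calls a context effect.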