If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Most companies are struggling to develop working artificial intelligence strategies, according to a new survey by cloud services provider Rackspace Technology. The survey of 1,870 organizations across a variety of industries, including manufacturing, finance, retail, government, and healthcare, shows that only 20 percent of companies have mature AI/machine learning initiatives. The rest are still trying to figure out how to make it work. Lower costs, improved precision, better customer experience, and new features are some of the benefits of applying machine learning models to real-world applications. But machine learning is not a magic wand.
As advocates for facial recognition tout the tech's potential to track down the US Capitol rioters, a new Amnesty International campaign has provided a timely reminder of the software's dangers. The NGO has shared a stream of examples of how the software amplifies racist policing and threatens the right to protest -- and called for a global ban on the tech. The Ban the Scan campaign was launched on Tuesday in New York City, where facial recognition has been used 22,000 times since 2017. Amnesty notes that the software is often prone to errors. But even when it "works," it can exacerbate discriminatory policing, violate our privacy, and threaten our rights to peaceful assembly and freedom of expression.
The graph represents a network of 1,050 Twitter users whose tweets in the requested range contained "iiot machinelearning", or who were replied to or mentioned in those tweets. The network was obtained from the NodeXL Graph Server on Friday, 26 February 2021 at 12:17 UTC. The requested start date was Friday, 26 February 2021 at 01:01 UTC and the maximum number of tweets (going backward in time) was 7,500. The tweets in the network were tweeted over the 2-day, 1-hour, 28-minute period from Tuesday, 23 February 2021 at 23:20 UTC to Friday, 26 February 2021 at 00:48 UTC. Additional tweets that were mentioned in this data set were also collected from prior time periods.
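A network like the one above treats each Twitter user as a node and draws a directed edge whenever one user replies to or mentions another. As a minimal sketch of that construction (the tweet records and field names below are illustrative assumptions, not the actual NodeXL data format):

```python
# Build a directed user network from tweet records, where an edge
# A -> B means A replied to or mentioned B. Edge weights count
# repeated interactions between the same pair of users.
from collections import defaultdict

# Hypothetical tweet records for illustration.
tweets = [
    {"author": "alice", "mentions": ["bob"], "reply_to": None},
    {"author": "bob", "mentions": [], "reply_to": "carol"},
    {"author": "carol", "mentions": ["alice", "bob"], "reply_to": None},
]

edges = defaultdict(int)  # (source, target) -> interaction count
for t in tweets:
    for target in t["mentions"]:
        edges[(t["author"], target)] += 1
    if t["reply_to"]:
        edges[(t["author"], t["reply_to"])] += 1

users = {u for pair in edges for u in pair}
print(f"{len(users)} users, {len(edges)} edges")
```

Tools like NodeXL then lay out and cluster this kind of edge list to produce the visualized graph.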
Ask any seller of a highly complex and customizable chatbot or virtual agent system about cost and you're likely to get an evasive answer. 'There's no one-size-fits-all.' 'I'd need to talk to you on the phone to give you an accurate quote.' Increasingly, in this ever-saturating market, it's easy to find elements of chatbot pricing (i.e., API request fees) or flat monthly subscription costs for low-end systems, but who is giving the educated bot buyer a clear, top-to-bottom view of what it costs to build a system that will really work? By 'really work,' I mean one that will materially contribute to cost savings, improve customer satisfaction, and maybe even generate new revenue. In other words, how much is it realistically going to cost to build a bot your customers will actually want to use? The truth is, building a successful chatbot is not purely a question of technology.
Google is making changes to how it reviews papers following an internal revolt over the company's controversial practices. Leading AI ethics researcher Timnit Gebru was fired from Google in December last year after sending an email to colleagues which criticised the company's practices. Gebru claims Google blocks the publication of papers that may invite criticism of the company's work, including her most recent paper, which questioned whether language models can be too big, who benefits from them, and whether they can increase prejudice and inequalities. In an email to employees following Gebru's firing, Jeff Dean, Head of Google Research, said: "Papers often require changes during the internal review process (or are even deemed unsuitable for submission). Unfortunately, this particular paper was only shared with a day's notice before its deadline -- we require two weeks for this sort of review -- and then instead of awaiting reviewer feedback, it was approved for submission and submitted. A cross-functional team then reviewed the paper as part of our regular process and the authors were informed that it didn't meet our bar for publication and were given feedback about why."
GPT-3 has created a lot of buzz since its release a few months ago. The system can generate (almost) plausible conversations with the likes of Nietzsche, write op-eds for The Guardian, and was even used successfully to post undercover comments on Reddit for a week. But even with GPT-3, AI is still stuck in the Uncanny Valley. GPT-3 output feels like it was written by a human at first glance, but it isn't quite. On closer inspection, it lacks substance and coherence.
The majority of organisations globally lack the internal resources to support critical artificial intelligence and machine learning initiatives, according to a new study from Rackspace Technology. The survey is titled "Are Organisations Succeeding at AI and Machine Learning?" "This study shines a light on the struggle to balance the potential benefits of AI and ML against the ongoing challenges of getting AI/ML initiatives off the ground," Rackspace says. "While some early adopters are already seeing the benefits of these technologies, others are still trying to navigate common pain points such as lack of internal knowledge, outdated technology stacks, poor data quality or the inability to measure ROI." Participants in the APJ region rated themselves slightly higher for advanced maturity in AI/ML, at 18% compared to 17% globally. APJ participants were more likely to be using AI/ML in more applications and use cases, and are spending significantly more on average than global participants ($1.3 million vs $1.06 million).
Year by year, traffic has only gotten worse in most cities across the world. This is particularly true for cities in Asia, where traffic congestion has grown rapidly due to rapid urbanization and rising median incomes. In the Indian capital of Delhi, for instance, drivers spend as much as 58% more time stuck in traffic than drivers in any other city in the world. In the face of this mounting economic, health, and environmental challenge, technology may be one of our best allies when it comes to reducing time spent in traffic. Expanding roadways, improving public transit, and encouraging alternative forms of mobility are definitely important and have their part to play in improving traffic.
It's hard to feel connected to someone who's gone, through a static photo. So a company called MyHeritage, which provides automatic AI-powered photo enhancement, is now offering a new service that can animate the people in old photos, creating a short video that looks like it was recorded while they posed and prepped for the portrait. Called Deep Nostalgia, the resulting videos are reminiscent of the Live Photos feature in iOS and iPadOS, where several seconds of video are recorded and saved before and after the camera app's shutter is pressed. But where Live Photos is intended to help find the perfect shot and framing that may have been missed the exact second the shutter was pressed, Deep Nostalgia is instead meant to bring still shots, even those not captured on a modern smartphone, to life. The conversion process is completely automated. Users simply upload a photograph through the MyHeritage website, where it's first sharpened and enhanced, not only to improve the quality of the final animation but also to make it easier for the deep learning algorithm (created by a company called D-ID) to do its thing.