If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
A combined team of researchers from the University of British Columbia and the University of Alberta has found that at least some machine learning applications can learn from far fewer examples than previously assumed. In their paper published in the journal Nature Machine Intelligence, the group describes testing they carried out with machine learning applications created to predict certain types of molecular structures. Machine learning can be used in a wide variety of applications; one of the best known is learning to spot people or objects in photographs. Such applications typically require huge amounts of data for training. In this new effort, the researchers found that in some instances, machine learning applications do not need such huge amounts of data to be useful.
With electric vehicles slowly gaining momentum toward becoming the dominant form of transportation in the U.S., two startups have struck up a partnership to help cities and utilities figure out where to put more car chargers. StreetLight Data, which sells transportation data to local governments, will offer Volta Charging's PredictEV tool to its customers. The tool uses AI to generate suggestions about where electric charging infrastructure would be most useful, an urban planning consideration that is becoming more important as more electric vehicles hit the streets. Today, electric vehicles make up only around 2 percent of new vehicles sold in the U.S., but that number is rising rapidly. In 2020, Pew Research found that the number of EVs sold in the country had more than tripled since 2016.
The capabilities of GPT-3 have led to a debate between those who believe that GPT-3 and its underlying architecture will enable Artificial General Intelligence (AGI) in the future and those (many from the school of logic and symbolic AI) who believe that without some form of logic there can be no AGI. The truth of the matter is that we don't know, as we don't really fully understand the human brain. In science and engineering we work on the basis of observation and testing. This section also addresses points raised by Esaú Flores. Gary Grossman, in an article entitled "Are we entering the AI Twilight Zone between AI and AGI?", observed that in February 2020, Geoffrey Hinton, the University of Toronto professor who is a pioneer of Deep Learning, noted: "There are one trillion synapses in a cubic centimeter of the brain. If there is such a thing as general AI, [the system] would probably require one trillion synapses." The human brain has a huge number of synapses. Each of its roughly 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion).
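As a quick sanity check on those figures, a minimal Python sketch multiplies the neuron count by the average synapses per neuron (both numbers are the estimates quoted above, not measured values):

```python
# Rough estimate of total synapses in the adult human brain,
# using the figures quoted in the text.
neurons = 10**11             # ~one hundred billion neurons
synapses_per_neuron = 7_000  # average synaptic connections per neuron

total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.1e}")  # 7.0e+14
```

That comes out to 7 x 10^14, i.e. on the order of the 10^15 figure cited for a young child's brain.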
OpenAI today released Triton, an open source, Python-like programming language that enables researchers to write highly efficient GPU code for AI workloads. Triton makes it possible to reach peak hardware performance with relatively little effort, OpenAI claims, producing code on par with what an expert could achieve in as few as 25 lines. Deep neural networks have emerged as an important type of AI model, capable of achieving state-of-the-art performance across natural language processing, computer vision, and other domains. The strength of these models lies in their hierarchical structure, which generates a large amount of highly parallelizable work well-suited for multicore hardware like GPUs.
Recently, the pandemic has pushed digital transformation to the front of the line. While collaborative tools allowed us to work from home and maintain close contact with our co-workers, the next step is just around the corner, thanks to artificial intelligence and machine learning. In every element of the company, the pandemic is driving a move toward a hybrid work paradigm, changing people management and the way we work. Enterprises are on the verge of digital transformation, and the use of artificial intelligence in HR departments will accelerate this process. Digital transformation improves the customer experience while also unlocking new value.
Microsoft has acquired Suplari, a Seattle-based vendor that provides "spend intelligence" information for managing supplier spending, for an undisclosed amount. Microsoft announced the deal on July 28. Microsoft plans to bring together the Suplari Spend Intelligence Cloud with Microsoft Dynamics 365, its ERP/CRM offering, which already includes a number of "insights" modules. Microsoft officials said Suplari helps companies transform data from sources like contracts, purchase orders, invoices, and expenses into actionable insights. From Microsoft's blog post on the Suplari acquisition: "Together with Dynamics 365, the Suplari Spend Intelligence Cloud will help customers maximize financial visibility by using AI to automate the analysis of current data and historical patterns from multiple data sources. It will also help customers enhance financial decision-making by predicting the best spend management actions moving forward."
AI is hungry for data. Training and testing the machine-learning tools to perform desired tasks consumes huge lakes of data. More data often means better AI. Yet gathering this data, especially data concerning people's behavior and transactions, can be risky. For example, in January of this year, the US FTC reached a consent order with Everalbum, a developer of photography apps.
Historically, the Department of Fair Employment and Housing (DFEH) has been highly selective in pursuing its own lawsuits. In California, individuals must lodge their complaint with the agency before filing a lawsuit against their employer. Typically the DFEH immediately grants them this right and reviews complaints for potential investigation, but it seldom pursues the cases itself. In 2019, the agency received 22,584 total complaints and filed four of its own cases. It filed 29 in 2018, following 20,822 complaints.
Graphics processing units from Nvidia are too hard to program, even with Nvidia's own programming tool, CUDA, according to artificial intelligence research firm OpenAI. The San Francisco-based AI startup, which is backed by Microsoft and VC firm Khosla Ventures, on Wednesday introduced the 1.0 version of a new programming language specially crafted to ease that burden, called Triton, detailed in a blog post with a link to the GitHub source code. OpenAI claims Triton can deliver substantial ease-of-use benefits over coding in CUDA for some neural network tasks at the heart of machine learning forms of AI, such as matrix multiplications. "Our goal is for it to become a viable alternative to CUDA for Deep Learning," the leader of the effort, OpenAI scientist Philippe Tillet, told ZDNet via email. Triton "is for machine learning researchers and engineers who are unfamiliar with GPU programming despite having good software engineering skills," said Tillet.
Professional drone pilots need to consider that they will be generating huge amounts of data in the form of photos and video. High-quality images, along with 4K and even 5.4K video, take up a crazy amount of space, and if you don't plan for it right at the start, you're quickly going to get swamped by it. I've been a pro-am photographer for years and know just how quickly gigabytes can fill up, but even that didn't prepare me for getting into drone photography and videography. There are two aspects to handling the photos and video once they have been captured onto high-quality microSD cards (I only use SanDisk Pro or Extreme Pro cards from reputable suppliers; cheap cards can't handle the data speeds needed for 4K and 5.4K, and fake cards are hugely unreliable). The first is ingesting the data off the cards, and the second is storage.
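To see how quickly footage piles up, here's a minimal Python sketch; the 100 Mbps bitrate is an assumed, illustrative figure (actual drone bitrates vary by model, resolution, and codec):

```python
# Estimate how many gigabytes an hour of 4K footage consumes,
# assuming an illustrative recording bitrate of 100 Mbps.
bitrate_mbps = 100       # megabits per second (assumed, varies by drone/codec)
seconds_per_hour = 3600

# Megabits -> gigabytes: divide by 8 (bits to bytes), then by 1000 (MB to GB).
gb_per_hour = bitrate_mbps * seconds_per_hour / 8 / 1000
print(f"{gb_per_hour:.0f} GB per hour")  # 45 GB per hour
```

At that rate a 256 GB card holds under six hours of footage, which is why both ingest and long-term storage need planning from day one.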