If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Google on Wednesday announced a slew of features powered by Artificial Intelligence (AI), but a mistake in an ad caused its share price to tank. The search engine giant is rushing into the space after the bot ChatGPT caught the imagination of web users around the world with its ability to generate essays, speeches and even exam papers in seconds. Microsoft has announced a multibillion-dollar partnership with ChatGPT maker OpenAI and unveiled new products on Tuesday, while Google tried to steal a march a day earlier by announcing its "Bard" alternative. The bots are quickly being integrated into search engines, and Google is battling to preserve its two-decade dominance of the web search industry. But astronomers on Twitter quickly noticed that Google's Bard had made a factual error in an ad touting the new technology.
Google parent Alphabet lost $100bn in market value on Wednesday after its new chatbot shared inaccurate information in a promotional video and a company event failed to dazzle, feeding worries that it is losing ground to rival Microsoft. Alphabet shares, which slid as much as 9% during regular trading, were flat after hours. Microsoft shares rose about 3% before paring gains. They were also flat in post-market trading. Google's woes began after Reuters reported an error in its advertisement for chatbot Bard, which debuted on Monday.
Can you get secondhand embarrassment for an entire company? Because that's what watching Google's Live From Paris event felt like. On the heels of Microsoft's exciting news about the new AI-powered Bing, Google's Bard AI technology was reported to be inaccurate and faced event blunders. It's an uncharacteristic occurrence for the tech giant and underscored the "code red" Google declared internally following the meteoric popularity of ChatGPT. To be fair, the main purpose of Google's event was to announce new Google Maps and Google Search features and updates that have been in the works for a while.
Imagine your company just hired some hot new talent, a rising star in the executive suite so alluring that a rival firm just hired a lookalike. The buzz around them is intoxicating. Everyone seems to agree, from the CEO to the shareholders, this person is the future of the entire business. Then you learn the executive has what is politely termed a "hallucination problem." Every time they open their mouth, there's a 15 to 20 percent chance they might just make stuff up. A professor at Princeton calls the guy a bullshit generator.
Southwest Airlines Chief Operating Officer Andrew Watterson will apologize on Thursday before a U.S. Senate committee over the holiday meltdown that led to the cancellation of 16,700 flights and pledge changes to ensure that there will be no repeats. "Let me be clear: we messed up. In hindsight, we did not have enough winter operational resilience," Watterson's written testimony for a U.S. Senate Commerce Committee hearing seen by Reuters says. In other written testimony seen by Reuters, Southwest Airlines Pilots Association (SWAPA) President Casey Murray will tell the committee that the low-cost carrier's "overconfidence" in planning and a "systemic failure to provide modern tools" were responsible for the December meltdown that the union said stranded 2 million passengers and is estimated to have cost it more than $1 billion. Murray will tell the committee that pilots "have been sounding the alarm about (Southwest's) inadequate crew scheduling technology and outdated operational processes for years. Unfortunately, those warnings were summarily ignored."
Google and other major tech companies this week have been showcasing how conversational chatbots can help improve internet search. In one instance, however, Google may have inadvertently showcased the technology's shortcomings. Google on Monday tweeted a GIF demonstrating how one would use its new experimental AI chat service, Bard. Built on Google's own large language models (LLMs), Bard is designed to give users conversational answers to relatively complex questions.
The competition between the two tech giants reflects the excitement and hype around technology called generative AI, which uses massive computer programs trained on reams of text and images to build bots that conjure content of their own based on relatively complex questions. Google first unveiled its chatbot LaMDA in 2021, but didn't make it available to the public. Last year, smaller AI company OpenAI made its chatbot ChatGPT and image generator DALL-E available to the public, spurring a burst of interest in the technology, which in turn pushed Microsoft and Google to rush out their products.
Big Tech is scrambling to release products that can compete with OpenAI's ChatGPT, the sensational AI chatbot that is also the fastest-growing app ever. Google's entry, Bard, is set to be released "in the coming weeks" according to a Google blog post, but it's already nailing its impersonation of ChatGPT by generating inaccurate information. Google's blog entry about Bard contains an animated graphic designed to demo the Bard user experience, and, long story short, the AI in it wrongly claims the James Webb Space Telescope took the first ever picture of an exoplanet. In September of last year, Webb took its first picture of an exoplanet, but that wasn't the first ever picture of an exoplanet; that milestone occurred in 2004. It's not clear what happened, but one eyebrow-raising aspect of Bard's claim about James Webb is how recent it is.
For decades, Google has dominated two core elements of the way we use the internet: search engines and browsers. But the rise of new AI tools means those near-monopolies are looking shakier than they have in years. On Tuesday, Microsoft launched an attempt to dethrone Google, announcing that it would integrate the powerful AI behind the breakout chatbot ChatGPT into its rival search engine, Bing, and its web browser Edge. The technology will allow Bing to better answer more conversational queries that traditional search engines currently struggle with, Microsoft said in a blog post. Web search in its current form is "great for finding a website, but for more complex questions or tasks too often it falls short," the blog post said.
ChatGPT is a version of GPT-3, a large language model also developed by OpenAI. Language models are a type of neural network that has been trained on lots and lots of text. Because text is made up of sequences of letters and words of varying lengths, language models require a type of neural network that can make sense of that kind of data. Recurrent neural networks, invented in the 1980s, can handle sequences of words, but they are slow to train and can forget previous words in a sequence. In 1997, computer scientists Sepp Hochreiter and Jürgen Schmidhuber fixed this by inventing LSTM (Long Short-Term Memory) networks, recurrent neural networks with special components that allow past data in an input sequence to be retained for longer. LSTMs could handle strings of text several hundred words long, but their language skills were limited.
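The gating idea behind LSTMs can be sketched in a few lines. The toy below is a single-unit, scalar LSTM cell with arbitrary fixed weights (not a trained model, and far simpler than the vector-valued cells used in real systems): a forget gate decides how much old memory to keep, an input gate decides how much new information to write, and an output gate decides how much of the cell state to expose. The weight values and the `lstm_step` helper are purely illustrative assumptions, not any library's API.

```python
import math

def sigmoid(x):
    """Squash a value into (0, 1); used for the three gates."""
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One step of a toy scalar LSTM cell.

    W maps each gate name to (input weight, recurrent weight, bias).
    The cell state c is what lets the network retain information
    across long sequences, which plain RNNs struggle to do.
    """
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])   # forget gate
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])   # input gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2]) # candidate memory
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])   # output gate
    c = f * c_prev + i * g     # keep some old memory, write some new memory
    h = o * math.tanh(c)       # hidden state passed on to the next step
    return h, c

# Run a short input sequence through the cell with arbitrary weights.
W = {k: (0.5, 0.1, 0.0) for k in ("f", "i", "g", "o")}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:
    h, c = lstm_step(x, h, c, W)
```

Because the forget gate multiplies the previous cell state rather than repeatedly pushing it through a squashing function, information can survive many steps, which is the property Hochreiter and Schmidhuber's design added to recurrent networks.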