If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Insurance companies should also be increasingly engaged in the governance of AI systems in the face of growing regulatory pressure. Every organization should have an AI governance platform to avoid the risk of violating privacy and data protection laws, being accused of discrimination or bias, or engaging in unfair practices. "As soon as a similar regulation or legislation is passed, organizations are placed in a precarious position because [lack of governance] can lead to fines, loss of market share, and bad press. Every business [that] uses AI needs to have this on their radar," said Marcus Daley, technical co-founder of NeuralMetrics. NeuralMetrics is an insurtech data provider that aids in commercial underwriting for property and casualty (P&C) insurers.
Type in a prompt like "a chocolate bar riding a bicycle in the style of Picasso," and artificial intelligence tools including DALL-E, Midjourney and Stable Diffusion can conjure an image for you in seconds. They do so by incorporating elements from the vast libraries of digitally available images and artwork from across the internet that they have been trained on. Whether that training amounts to copyright infringement is the question at the heart of two new lawsuits. Last week, Seattle-based stock image giant Getty Images announced that it has initiated legal proceedings against Stability AI, the maker of Stable Diffusion. Getty alleges that the company has copied millions of its images and "[chosen] to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests."
I hold an ambivalent, two-sided position on generative art and, more broadly, generative creativity. One view is extremely cynical, and the other is hopeful. I wrote earlier about this topic here (note: a bit gloomy). Let me start with the cynical view, hyperbolized for ease of communication. I see this as a big tech effort to lower tech wages, weaken the negotiating position of creative workers, push the commoditization of art, create a new scalable consumer market, and, more holistically, drive society towards transhumanism.
Across the technology industry, artificial intelligence (AI) has boomed over the last year. Lensa went viral creating artistic avatar artwork generated from real-life photos. The OpenAI chatbot ChatGPT garnered praise as a revolutionary leap in generative AI with the ability to provide answers to complex questions in natural language text. Such innovations have ignited an outpouring of investments even as the tech sector continues to experience major losses in stock value along with massive job cuts. And there is no indication the development of these AI-powered capabilities will slow down from their record pace.
The continuing development of AI systems represents a profound achievement of the digital age that brings with it tremendous opportunities. In fact, many in the creative industry are already using or plan to use AI for the creation of a wide range of works that benefit society. But as with many advances in technology, these new opportunities come with challenges. This licensing activity is evidence of existing markets for text and data mining (TDM). It is important that the conditions of those licenses are respected and that they are not undermined by new exceptions that excuse unauthorized uses.
Futurist Bernard Marr recently boldly claimed on the Business Leader Podcast that 'every single company on the planet will be an AI one in the future'. It's hard to know for sure if that will be the case, but it is certainly true that artificial intelligence is being used by more and more businesses to try to give them a competitive edge when it comes to productivity and output. In this article, we explore how companies are utilising this technology and what real-world impact it is having. The most common assumption when people talk about AI and machine learning is that jobs will end up being replaced by this technology and that people will somehow end up being usurped or undermined by artificial intelligence. Andrew Tsonchev, VP of Technology at Darktrace, believes that whilst the technology is exciting, its outcomes are a little less apocalyptic. Darktrace were one of the early adopters in their sector and use AI to detect cyber security threats.
From "intelligent" vacuum cleaners and driverless cars to advanced techniques for diagnosing diseases, artificial intelligence has burrowed its way into every arena of modern life. Its promoters reckon it is revolutionising human experience, but critics stress that the technology risks putting machines in charge of life-changing decisions. Regulators in Europe and North America are worried. The European Union is likely to pass legislation next year, the AI Act, aimed at reining in the age of the algorithm. The United States recently published a blueprint for an AI Bill of Rights, and Canada is also mulling legislation.
In this episode of Work in Progress, Gary Shapiro, president & CEO of the Consumer Technology Association (CTA), joins me to talk about the world's biggest tech event – the Consumer Electronics Show (CES) 2023 – underway this week in Las Vegas. More than 100,000 people are expected at CES to get a look at what's ahead for us in 2023 and beyond from more than 1,000 exhibitors. "What we're going to see is the growth of so many categories such as digital health and EV (electric vehicles) and all the different transportation alternatives. It's one of the largest car shows in the world, along with the whole ecosystem and new technologies that are coming, which the car companies are relying on," says Shapiro. He says there will be a lot of focus on artificial intelligence, virtual reality, cybersecurity, food security, agriculture, sustainability, and entertainment.
"This year, the big milestone was having the board open its doors and start accepting claims," Perlmutter said, adding that board decisions will start coming in the next year. Though it is "still early days" and it remains unclear what the standard volume of claims will be, Perlmutter said she is "extremely impressed" with how well the board is doing. It has received over 260 cases so far. She added that several of the cases have been dismissed; the office believes that means they have been settled, which would be consistent with the board's role as an alternative dispute resolution mechanism, she said. "We set up this totally new tribunal in really record time. I think most other agencies who have seen what we've done can't understand how we managed that in under a year and a half, because it required a lot of work," she said.
Ethical AI will need the careful planting of many ecosystems. Ethical AI has been a concern of AI leaders and practitioners for many years, but finally, it seems, global jurisdictions are starting to move from policy formulation and stakeholder engagement to putting some teeth into drafting legal bills and acts. Expect many new laws to pass in 2023, tightening citizen privacy and creating risk frameworks and audit requirements for data bias, privacy, and security risks. At the same time, regulators will have to evolve an entire global ecosystem to ensure AI audits are effectively conducted. Many questions loom: who will validate certifications for AI audit practices? And will we overburden AI innovation, as we have done in so many other regulated operating practices, to the point that the risks and costs of non-conformance inhibit innovation and capital funding? Finding a balance will be key.